
Logical backup pods take huge resource requests and limits from the Postgres cluster #1939


Open
gbarazer opened this issue Jun 23, 2022 · 4 comments

Comments

@gbarazer

Please answer some short questions which should help us understand your problem / question better:

The logical backup cron jobs inherit their resource requests and limits from the cluster they are linked to, which can be huge for big Postgres deployments. This is inappropriate, as the resource consumption of the logical backup pods is quite low and stable, and is not related to the resources defined for the Postgres cluster.

E.g.: if my cluster is defined with a resource request of 20 CPU and 200Gi RAM, the logical backup job will define the same request, whereas in practice the pod itself barely consumes 1 CPU (mainly for compression) and less than 1Gi of RAM.

  • Which image of the operator are you using? registry.opensource.zalan.do/acid/postgres-operator:v1.8.0
  • Where do you run it - cloud or metal? Kubernetes or OpenShift? [on premise virtualisation / Xen K8S]
  • Are you running Postgres Operator in production? yes
  • Type of issue? Bug report

A proposed solution would be to have sane defaults for the resource requests and limits of the logical backup jobs, and optionally to be able to define these directly in the cluster CR (as well as other useful options like the retry backoff limit and job history limits); see the sketch below.
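To illustrate the proposal, here is a rough sketch of what a dedicated field in the postgresql manifest could look like. The `logicalBackupResources` field is hypothetical and does not exist in the CRD today; it only sketches the idea of decoupling the backup job resources from the cluster resources.

```yaml
# Hypothetical sketch of the proposal: dedicated resources for the logical
# backup cron job, defined per cluster. The logicalBackupResources field does
# NOT exist in the operator today; everything else mirrors the existing CRD.
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-example-cluster
spec:
  teamId: acid
  numberOfInstances: 2
  postgresql:
    version: "14"
  volume:
    size: 500Gi
  resources:
    requests:
      cpu: "20"
      memory: 200Gi
    limits:
      cpu: "20"
      memory: 200Gi
  enableLogicalBackup: true
  logicalBackupSchedule: "30 00 * * *"
  # Proposed (hypothetical): small, independent resources for the backup job
  logicalBackupResources:
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 1Gi
```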

FxKu (Member) commented Jun 29, 2022

This is a known problem since #688. With #710 I had the idea to only pass default resources, but I guess the best would be to have dedicated options for the logical backup pod. Maybe from the manifest then.


rsaphala commented Jul 5, 2022

Any updates on this? We are having the same issue.

@dcardellino

Same issue here. Anything new?

@BjoernPetersen

I think this can be closed, right? While it's still not possible to configure the resources per cluster, it has been possible for a while now to define them for all logical backup pods of the operator; a sketch follows below.
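For reference, a rough sketch (from memory, so the exact option names may vary between operator versions — please check the docs of the release you run) of how such operator-wide defaults are set in the OperatorConfiguration CRD:

```yaml
# Rough sketch of operator-wide resource defaults for all logical backup pods,
# set via the OperatorConfiguration CRD. Option names are quoted from memory
# and may differ between operator versions.
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
  name: postgresql-operator-default-configuration
configuration:
  logical_backup:
    logical_backup_schedule: "30 00 * * *"
    # applied to every logical backup cron job created by this operator
    logical_backup_cpu_request: "500m"
    logical_backup_cpu_limit: "1"
    logical_backup_memory_request: 500Mi
    logical_backup_memory_limit: 1Gi
```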
