How to allocate hugepages resource in postgres-operator #1549


Closed · sskurapati opened this issue Jul 6, 2021 · 5 comments · Fixed by #2311

@sskurapati

Please answer some short questions which should help us to understand your problem / question better:

  • Which image of the operator are you using? registry.opensource.zalan.do/acid/postgres-operator:v1.6.3
  • Where do you run it - cloud or metal? Kubernetes or OpenShift? Kubernetes on CentOS 7
  • Are you running Postgres Operator in production? no
  • Type of issue? question

We run all our services on CentOS 7 using docker-compose and are now planning to move to Kubernetes. I am running our pods on Kubernetes on CentOS 7, and some of them require hugepages, so we enabled hugepages on the machine.

We are trying to use the Postgres Operator for data-store deployment on Kubernetes, but I am not able to allocate hugepages resources through the operator.

When creating my own Postgres pod, I can pass hugepages in the resources section like this:

resources:
  limits:
    cpu: 100m
    memory: 100Mi
    hugepages-2Mi: "300Mi"
  requests:
    cpu: 100m
    memory: 50Mi
    hugepages-2Mi: "300Mi"

How can I achieve this with the Postgres Operator?

I am facing the following issue while creating a Postgres cluster with the operator:

selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... Bus error (core dumped)
child process exited with exit code 135
initdb: removing data directory "/home/postgres/pgdata/pgroot/data"
pg_ctl: database system initialization failed
2021-07-05 13:02:27,414 INFO: removing initialize key after failed attempt to bootstrap the cluster
Traceback (most recent call last):
  File "/usr/local/bin/patroni", line 33, in <module>
    sys.exit(load_entry_point('patroni==2.0.2', 'console_scripts', 'patroni')())
  File "/usr/local/lib/python3.6/dist-packages/patroni/__init__.py", line 170, in main
    return patroni_main()
  File "/usr/local/lib/python3.6/dist-packages/patroni/__init__.py", line 138, in patroni_main
    abstract_main(Patroni, schema)
  File "/usr/local/lib/python3.6/dist-packages/patroni/daemon.py", line 100, in abstract_main
    controller.run()
  File "/usr/local/lib/python3.6/dist-packages/patroni/__init__.py", line 108, in run
    super(Patroni, self).run()
  File "/usr/local/lib/python3.6/dist-packages/patroni/daemon.py", line 59, in run
    self._run_cycle()
  File "/usr/local/lib/python3.6/dist-packages/patroni/__init__.py", line 111, in _run_cycle
    logger.info(self.ha.run_cycle())
  File "/usr/local/lib/python3.6/dist-packages/patroni/ha.py", line 1457, in run_cycle
    info = self._run_cycle()
  File "/usr/local/lib/python3.6/dist-packages/patroni/ha.py", line 1351, in _run_cycle
    return self.post_bootstrap()
  File "/usr/local/lib/python3.6/dist-packages/patroni/ha.py", line 1247, in post_bootstrap
    self.cancel_initialization()
  File "/usr/local/lib/python3.6/dist-packages/patroni/ha.py", line 1240, in cancel_initialization
    raise PatroniFatalException('Failed to bootstrap cluster')
patroni.exceptions.PatroniFatalException: 'Failed to bootstrap cluster'
/run/service/patroni: finished with code=1 signal=0
sskurapati changed the title from "How to allocate hugepages resource in postgres-oprator" to "How to allocate hugepages resource in postgres-operator" on Jul 6, 2021

sskurapati commented Jul 8, 2021

After applying the following patch to the StatefulSet, the Postgres container is created successfully:

kubectl patch statefulset data-store --type='json' -p='[
{   
    "op": "replace", 
    "path": "/spec/template/spec/containers/0/resources/requests/hugepages-2Mi", "value":"50Mi"
},
{
    "op": "replace", 
    "path": "/spec/template/spec/containers/0/resources/limits/hugepages-2Mi", "value":"50Mi"
}]'
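
(Note: per RFC 6902, a JSON Patch "replace" fails if the target path does not exist yet, while "add" works either way, since "add" overwrites an existing member. If the container has no hugepages entries to begin with, a sketch of the same patch using "add" would be:)

kubectl patch statefulset data-store --type='json' -p='[
{
    "op": "add",
    "path": "/spec/template/spec/containers/0/resources/requests/hugepages-2Mi", "value":"50Mi"
},
{
    "op": "add",
    "path": "/spec/template/spec/containers/0/resources/limits/hugepages-2Mi", "value":"50Mi"
}]'

Keep in mind that the operator reconciles its StatefulSets, so a manual patch like this may be reverted on the operator's next sync.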

From this, my understanding is that passing hugepages as a resource when creating a pod works, but I don't see an option in the Postgres Operator to set it.

Please correct me if I am wrong.

@sskurapati (Author)

The Kubernetes documentation also states that hugepages have to be requested as a resource:
https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/
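
For reference, the example on that page looks roughly like this (a sketch based on the linked Kubernetes docs; the hugetlbfs volume mount is only needed by applications that access /hugepages directly, and hugepages requests default to the limits when omitted):

apiVersion: v1
kind: Pod
metadata:
  name: huge-pages-example
spec:
  containers:
  - name: example
    image: fedora:latest
    command: ["sleep", "inf"]
    volumeMounts:
    # hugetlbfs mount for applications that open files under /hugepages
    - mountPath: /hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages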

Is there any plan to support setting hugepages resource requests and limits from the manifest file?

@sskurapati (Author)

@alexeyklyukin @FxKu @RafiaSabih @mkabilov @Jan-M
Can someone please reply?

@marcellodesales

@sskurapati any status on this?

@silenium-dev (Contributor)

I guess the simplest way to do this is to pass the hugepages-2Mi and hugepages-1Gi resource requests and limits through from the postgresql resource to the StatefulSet. I will look into it and maybe submit a pull request. This was also the suggestion in #1788.

silenium-dev added a commit to silenium-dev/postgres-operator that referenced this issue Apr 28, 2023
FxKu closed this as completed in #2311 on Jan 4, 2024
FxKu added a commit that referenced this issue Jan 4, 2024
… to the statefulset (#2311)

* Add hugepages-2Mi and 1Gi to ResourceDescription type and crd (#1549, #1788)
* Add tests for hugepages resource requests/limits
* Add tests for hugepages resource requests/limits on sidecars, too
* Add docs for hugepages support
* Add link to kubernetes docs on hugepages
* Add tests for hugepages not being set on container if not requested in custom resource
* Add hugepages resources fields to manifest docs
* Add hugepages resources fields to complete manifest example
* Add hugepages resources fields to chart crd

---------

Co-authored-by: Felix Kunde <felix-kunde@gmx.de>
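
With #2311 merged, hugepages can be requested directly in the postgresql manifest. A minimal sketch based on the fields named in the commit above (the cluster name, versions, and values are illustrative, not taken from the project docs):

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  numberOfInstances: 1
  postgresql:
    version: "15"
  volume:
    size: 1Gi
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
      # hugepages fields added to ResourceDescription by #2311
      hugepages-2Mi: 128Mi
    limits:
      cpu: 500m
      memory: 500Mi
      hugepages-2Mi: 128Mi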