dockerd panic: runtime error: slice bounds out of range on CentOS 7 #26384
Hello, I'm having a similar issue: I'm running a 1.12.1 swarm cluster with 5 managers and 2 nodes, hosting one service. Containers are normally deployed all across the cluster, on both managers and nodes. After a random number of minutes, Docker on one or both of the nodes goes down. The rest of the cluster (meaning all the managers) is fine; the containers that were hosted on the dead node(s) are rescheduled on the managers, so the total number of containers in the cluster remains the same (once the moved ones become available). 'docker node ls' on the managers shows the node(s) state as "Down".
SSHing into the nodes and running 'docker ps' will hang for a couple of seconds; that then wakes Docker up again and it outputs the docker ps header with no containers. The outcome above has been replicated a number of times, on newly deployed clusters with different AWS instances (same config, scripted via Terraform). The context is: Log output and debug info: This is the log on one of the nodes after going down:
Docker version:
Docker info:
Please let me know if any other debug info is needed, and thanks for helping!
This is caused by a bug in stdout/stderr processing in Docker 1.12's health checks. Until the next release is available, a workaround to prevent dockerd from panicking is to make sure that stdout and stderr aren't written to at the same time by the command(s) used in a HEALTHCHECK, e.g.
Alternatively, if you would still like to see healthcheck output in
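To illustrate the workaround described above, here is a minimal sketch of merging a health-check command's stderr into stdout so dockerd only ever reads a single stream. The `health_cmd` function and its output lines are hypothetical stand-ins for whatever command a real HEALTHCHECK would run; the Dockerfile line in the comment is likewise an assumed example, not taken from this thread.

```shell
#!/bin/sh
# Hedged sketch of the Docker 1.12 HEALTHCHECK workaround: a check command
# that would normally write to stdout and stderr concurrently (which can
# trigger the dockerd panic) has its stderr merged into stdout with 2>&1,
# so only one stream is ever written to.

# Hypothetical health-check command that writes to both streams.
health_cmd() {
    echo "status: healthy"          # normal output on stdout
    echo "warning: response slow" 1>&2   # diagnostic output on stderr
}

# In a Dockerfile this would look something like (hypothetical command):
#   HEALTHCHECK CMD /bin/sh -c "my_check 2>&1 || exit 1"

# The merged output now arrives on a single stream.
merged=$(health_cmd 2>&1)
echo "$merged"
```

An alternative along the same lines would be discarding stderr entirely (`2>/dev/null`) if the diagnostic output is not needed.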
Great, will give this a try, thanks @drakenator. Update: this workaround is working well on my setup, thanks.
Looks like this is resolved in the 1.12.x branch, and will be fixed on master through #26596.
Thanks for the quick fix & workaround @drakenator. I've been testing it since yesterday: so far so good!
Thanks to all of you; I just ran into this bug and this issue helped me a lot.
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.):
physical machine
Steps to reproduce the issue:
Describe the results you received:
After starting the application in the pypy container, dockerd panics with the following error:
Describe the results you expected:
dockerd should not panic.
Additional information you deem important (e.g. issue happens only occasionally):
Intermittent problem.