
Increase inotify watchers in docs #694


Merged

merged 2 commits into main from mterhar/2billion-watchers on Nov 4, 2021

Conversation

@mterhar (Contributor) commented Oct 28, 2021

@mterhar requested a review from jawnsy on October 28, 2021 15:08
@github-actions (bot) commented Oct 28, 2021

✨ Coder.com for PR #694 deployed! It will be updated on every commit.

@@ -189,7 +189,7 @@ spec:
command:
- sysctl
- -w
- fs.inotify.max_user_watches=16384
Contributor commented:

I'm not sure we should do this, because it also impacts kernel memory utilization (all the issues listed in the documentation). If we are going to change this, we should indicate (as a comment) how much memory this will require.

Contributor commented:

@jawnsy @mterhar From the Slack conversation linked in the PR description, it sounds like we shouldn't recommend the maximum? Could we raise this number without getting into the billions (if so, what's a good number? 2.5 million-ish, as Kyle suggested?), plus add the following information?

Each "watch" costs roughly 90 bytes on a 32bit kernel, and roughly 160 bytes on a 64bit one.

Contributor commented:

Maybe the solution is to make it an env var, add a comment with a calculation of memory usage, and use a larger default. 2 or 4 million does seem more tractable.
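A minimal sketch of that env-var variant (illustrative only: the container name, variable name, and busybox image are hypothetical, and it assumes the privileged init-container pattern the docs example already uses):

initContainers:
  - name: increase-inotify-watches   # hypothetical name
    image: busybox                   # assumption: any image with sh and sysctl
    securityContext:
      privileged: true               # assumption: writing a node-level sysctl needs privilege
    env:
      - name: MAX_USER_WATCHES       # hypothetical variable name
        value: "2097152"             # ~2 million watches x ~160 B ≈ 336 MB kernel memory
    command:
      - sh
      - -c
      - sysctl -w fs.inotify.max_user_watches=$MAX_USER_WATCHES

The sh -c wrapper is what makes the env var usable here; plain command list items aren't shell-expanded by Kubernetes.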

Contributor commented:

I think 2 million is a good place to land, but is there a reason to use an env var in this instance? Pardon my ignorance, but when I was looking at how to set one up for a container, it seemed to add complexity in this case.

Also, which memory allocation should be bumped, the requests or the limits?

Contributor commented:

The memory is kernel memory, and not accounted for in the memory control group, hence it isn't reflected in the request/limit settings. This means that large values can allow a user to consume all memory on the system. I've updated the example to default to 10 million watches, which is a large number and still a reasonable default.
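Using the per-watch figure quoted above, that default works out to roughly 10,000,000 × 160 bytes ≈ 1.6 GB of kernel memory on a 64-bit kernel, none of which is charged against the pod's requests or limits.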

@@ -189,7 +189,7 @@ spec:
command:
- sysctl
- -w
- fs.inotify.max_user_watches=16384
- fs.inotify.max_user_watches=2147000000
Contributor commented:

Suggested change
- fs.inotify.max_user_watches=2147000000
- fs.inotify.max_user_watches=2097152
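For reference: 2,147,000,000 sits just below the 32-bit signed-integer maximum (2,147,483,647), while the suggested 2,097,152 is 2^21, i.e. roughly 2 million.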

@khorne3 merged commit fb8d0fc into main on Nov 4, 2021
@khorne3 deleted the mterhar/2billion-watchers branch on November 4, 2021 22:16