chore: add global caching to rbac #7439
Conversation
Thoughts on this approach @Emyrk?
I'm not opposed to this. I'm curious what kind of savings this would yield. We should not use the full json payload as a key though.
The caching per request was intentional as a single request often does the same rbac check a lot. Example is fetching a workspace. You have to fetch the workspace build and the provisioner job, which all do the same rbac check on the workspace.
Going global means we might lose those per-request savings if more than 4096 entries get cached during the lifetime of a single request. Probably unlikely? So it's probably ok?
Just something to think about. Since the per-request cache will guarantee duplicate hits, it might be overkill to have both a per-request cache and a global cache.
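To make the trade-off concrete, here is a minimal sketch of what a fixed-size global authorization cache could look like. This is an illustrative assumption, not the actual implementation in `coderd/rbac`; the names (`lruCache`, `cacheEntry`) and the string key are hypothetical. A bounded LRU like this is what makes the 4096-entry eviction concern above possible: once the map is full, the least recently used check result is dropped.

```go
// Hypothetical sketch of a bounded, thread-safe global cache for
// authorization results, keyed by a compact string. Not the real
// coderd/rbac implementation.
package main

import (
	"container/list"
	"sync"
)

type cacheEntry struct {
	key string
	err error // result of the Authorize call (nil means allowed)
}

// lruCache is a minimal LRU: a doubly linked list tracks recency and a
// map gives O(1) lookup into the list.
type lruCache struct {
	mu      sync.Mutex
	max     int
	order   *list.List               // front = most recently used
	entries map[string]*list.Element // key -> element holding a cacheEntry
}

func newLRUCache(max int) *lruCache {
	return &lruCache{
		max:     max,
		order:   list.New(),
		entries: make(map[string]*list.Element),
	}
}

// Load returns the cached Authorize result and whether it was present.
func (c *lruCache) Load(key string) (error, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	el, ok := c.entries[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(cacheEntry).err, true
}

// Store records a result, evicting the least recently used entry when
// the cache exceeds its maximum size.
func (c *lruCache) Store(key string, err error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.entries[key]; ok {
		c.order.MoveToFront(el)
		el.Value = cacheEntry{key: key, err: err}
		return
	}
	c.entries[key] = c.order.PushFront(cacheEntry{key: key, err: err})
	if c.order.Len() > c.max {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.entries, oldest.Value.(cacheEntry).key)
	}
}

func main() {
	c := newLRUCache(4096)
	c.Store("subject1|read|workspace1", nil)
	if _, ok := c.Load("subject1|read|workspace1"); !ok {
		panic("expected cache hit")
	}
}
```

A per-request cache never needs eviction because it dies with the request; the global cache buys cross-request reuse at the cost of this bookkeeping.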
coderd/rbac/authz.go
Outdated
_ = enc.Encode(subject)
_ = enc.Encode(action)
_ = enc.Encode(object)
This is unfortunate. The json payload is quite large, and that being a key would be even more verbose.
It would be better to "hash" the fields somehow to reduce this.
I intentionally made the file astvalue.go to avoid json.Marshal-ing this data when going to the Rego evaluator.
We can use the role names for the hash and ignore the permissions. Whatever "hash" function we create will be faster than a json.Marshal, I bet.
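A sketch of what that could look like: hash only the identifying fields (role names, groups, action, object identity) rather than the full JSON payload. The field names below (`RoleNames`, `Groups`, `Type`, `Owner`, `OrgID`) and the `cacheKey` function are illustrative assumptions, not the actual types in `coderd/rbac`. Length-prefixing each field keeps the encoding unambiguous, so `"ab" + "c"` cannot collide with `"a" + "bc"`.

```go
// Hypothetical sketch of a compact cache key built from identifying
// fields instead of the full JSON payload. Field names are assumptions.
package main

import (
	"hash/fnv"
	"io"
	"strconv"
)

type subject struct {
	ID        string
	RoleNames []string // role names stand in for the full permission sets
	Groups    []string
}

type object struct {
	Type  string
	Owner string
	OrgID string
}

// cacheKey hashes only the fields that identify a unique RBAC check.
// Because role/group membership is part of the key, the key changes
// whenever the user's roles or groups change.
func cacheKey(s subject, action string, o object) uint64 {
	h := fnv.New64a()
	write := func(field string) {
		// Length prefix so adjacent fields cannot be confused.
		io.WriteString(h, strconv.Itoa(len(field)))
		io.WriteString(h, field)
	}
	write(s.ID)
	for _, r := range s.RoleNames {
		write(r)
	}
	for _, g := range s.Groups {
		write(g)
	}
	write(action)
	write(o.Type)
	write(o.Owner)
	write(o.OrgID)
	return h.Sum64()
}

func main() {
	s := subject{ID: "user-1", RoleNames: []string{"member"}, Groups: []string{"dev"}}
	o := object{Type: "workspace", Owner: "user-1", OrgID: "org-1"}
	_ = cacheKey(s, "read", o)
}
```

An FNV-style hash over a handful of short strings avoids the allocations and reflection that json.Marshal incurs, which is the point being made above.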
The main thing is to make sure the unique hash would be changed if the user changes roles/groups. Lmk if you want me to write this, I can get you a much smaller key.
@Emyrk: I have a benchmark that is just calling
From a correctness POV, as long as our unique key contains all the information required to match a unique RBAC check, it will be correct in all future calls too. So this can work for sure 👍.
@Emyrk very happy to hear that :)
@ammario If we go to 64k, then I have no issues with dropping the request cache. Can I write a hash function and commit it to this branch?
@Emyrk go for it
This is useful for caching purposes. Slice order matters for the hash key, so a slice with the same elements in a different order produces a different key. Although such slices are functionally equivalent, our slices have deterministic orders in the database, so our subjects/objects will match. There is no need to spend the time or memory to sort these lists.
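The order-sensitivity described above can be shown with a minimal sketch. This `hashSlice` helper is hypothetical; it simply feeds slice elements into an FNV hash in order, so `["admin", "member"]` and `["member", "admin"]` yield different keys (a real key function would also length-prefix elements to avoid boundary ambiguity).

```go
// Minimal demonstration that element order changes the hash key.
// hashSlice is an illustrative helper, not the real implementation.
package main

import (
	"hash/fnv"
	"io"
)

// hashSlice hashes slice elements in order; reordering the elements
// produces a different key even though the sets are equal.
func hashSlice(items []string) uint64 {
	h := fnv.New64a()
	for _, s := range items {
		io.WriteString(h, s)
	}
	return h.Sum64()
}

func main() {
	a := hashSlice([]string{"admin", "member"})
	b := hashSlice([]string{"member", "admin"})
	if a == b {
		panic("expected different keys for different orders")
	}
}
```

Because the database returns these slices in a deterministic order, identical subjects always hash identically, so skipping the sort is a pure win.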
👍
I'm assuming all calls to Authorize are pure in this implementation.