[Cache] Throw if Redis does not use supported eviction policies #34178

Closed

Conversation

andrerom
Contributor

| Q             | A
| ------------- | ---
| Branch?       | 4.4
| Bug fix?      | yes
| Deprecations? | no
| Tickets       | n.a.
| License       | MIT
| Doc PR        | n.a.

Adds validation to make sure Redis has been set up with a supported eviction policy, to avoid surprises when the cache suddenly becomes inconsistent. This is done in a similar way to the Redis version check in 4.3, but in this case I opted to throw in the constructor instead of doing it on demand and only logging a warning.
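
As a rough sketch of the constructor-time check (simplified and hypothetical; the class, method and message below are assumptions, not the exact code in this PR), the adapter could read maxmemory-policy once and throw when it is neither noeviction nor a volatile-* policy:

```php
<?php
// Hypothetical constructor-time guard, assuming a phpredis \Redis connection.
// Only "noeviction" and "volatile-*" policies keep tag keys safe, since tag
// keys are written without a TTL and are rarely read.

use Symfony\Component\Cache\Exception\InvalidArgumentException;

final class EvictionPolicyGuard
{
    public static function assertSupported(\Redis $redis): void
    {
        // CONFIG GET returns e.g. ['maxmemory-policy' => 'allkeys-lru']
        $config = $redis->config('GET', 'maxmemory-policy');
        $policy = $config['maxmemory-policy'] ?? 'noeviction';

        if ('noeviction' !== $policy && 0 !== strpos($policy, 'volatile-')) {
            throw new InvalidArgumentException(sprintf('Redis "maxmemory-policy" is "%s"; expected "noeviction" or a "volatile-*" policy so tag keys are not evicted before the items that reference them.', $policy));
        }
    }
}

// Assumed usage, once when wiring the adapter:
// $redis = new \Redis();
// $redis->connect('127.0.0.1');
// EvictionPolicyGuard::assertSupported($redis);
```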

WDYT @nicolas-grekas ?

TODO:

  • Test & adapt test setups for this

@nicolas-grekas
Member

nicolas-grekas commented Oct 29, 2019

Makes sense. Does this make a request to the server? I suppose yes. Are we OK with this extra roundtrip? Why? :)

@andrerom
Contributor Author

andrerom commented Oct 30, 2019

Does this make a request to the server? I suppose yes.

Yes indeed, just like the Redis server version check we had for sPOP (but that one was lazy).

Are we OK with this extra roundtrip? Why? :)

I'm not sure if it is OK or not, tbh. Two possible pros:

  • A lot of people overlook the max memory requirement & end up with "strange" cache issues
    • But on the eZ side we kind of plan to tackle this with some auditing / go-live check tools that point out mistakes in system configuration, so maybe that is the right place for it instead.
  • Tags in our case consume a lot of memory (working on reducing it atm), so I'm considering whether we should also set a (higher) expiry on tags when we detect the volatile-ttl setting, so they don't end up as an un-freeable chunk of memory that in some cases just rots (aka "never" invalidated, so just sitting there)

=> In the latter case we can move this logic to run on demand, on the first action that needs to save tag changes. We could also change it to a logged warning instead, like was the case with sPOP.
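
To illustrate the volatile-ttl idea above (a rough sketch only, with a hypothetical helper and an assumed TTL value; not code from this PR), tag keys could be given a long expiry whenever a volatile-* policy is detected, so Redis can eventually reclaim them:

```php
<?php
// Hypothetical sketch: give tag keys a long TTL under volatile-* policies so
// they stay evictable, instead of sitting in memory forever without expiry.

const TAG_TTL_UNDER_VOLATILE = 86400 * 30; // assumed value: 30 days

function saveTagRelation(\Redis $redis, string $tagKey, string $itemKey, string $policy): void
{
    // Tag => item relations live in a Redis set, as RedisTagAwareAdapter stores them.
    $redis->sAdd($tagKey, $itemKey);

    if (0 === strpos($policy, 'volatile-')) {
        // Without a TTL, volatile-* policies can never free this key.
        $redis->expire($tagKey, TAG_TTL_UNDER_VOLATILE);
    }
}
```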

/cc @Toflar

@nicolas-grekas
Member

nicolas-grekas commented Oct 30, 2019

A lot of people overlook the max memory requirement & end up with "strange" cache issues

Which interface do they use? CacheInterface::get() is allowed to throw, so maybe it should, when saving (i.e. when calling the callback)?

For the other methods, none can throw, so a warning on save() using CacheItem::log() would work for me. If we want to be more aggressive, save() could refuse to save anything until this is resolved. Would that help?
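
As a sketch of that softer option (assumed function shape, not the adapter's real internals), the check could log through CacheItem::log() and skip the write instead of throwing:

```php
<?php
// Sketch of warn-and-skip behaviour on save(): log once via CacheItem::log()
// and refuse to write instead of throwing.

use Psr\Log\LoggerInterface;
use Symfony\Component\Cache\CacheItem;

function saveWithPolicyGuard(array $values, string $policy, ?LoggerInterface $logger): bool
{
    if ('noeviction' !== $policy && 0 !== strpos($policy, 'volatile-')) {
        // Refusing to save keeps tags from being evicted later and serving stale data.
        CacheItem::log($logger, 'Redis "maxmemory-policy" is "{policy}"; expected "noeviction" or "volatile-*", skipping cache save.', ['policy' => $policy]);

        return false;
    }

    // ... perform the actual pipelined writes for $values here ...
    return true;
}
```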

@andrerom
Contributor Author

which interface do they use? CacheInterface::get() is allowed to throw, so maybe it should, when saving (ie when calling the callback)?

They don't use it directly, they use eZ, and we use PSRs where we can (and plan to continue doing so until someone standardizes Symfony Contracts within PSRs :P ).

Anyway, this would be on save, as that is where we can opt to set an expiry for tags.

If we want to be more aggressive, save() could refuse to save anything until this is resolved. Would that help?

I guess main options are:

  • Just log, like we did for sPOP
  • Log and refuse to save

@Toflar
Contributor

Toflar commented Oct 30, 2019

* A lot of people overlook the max memory requirement & end up with "strange" cache issues

What does "strange" mean here? I mean, I get the point and I do agree that setting an eviction policy makes sense, but I'm not really sure this is a Redis-specific issue, is it?
In theory, I might also run into "strange" issues if I run out of disk space using the FilesystemAdapter, wouldn't I? :)

@andrerom
Contributor Author

andrerom commented Oct 30, 2019

@Toflar we are perhaps talking about different issues here; to list them all:

  1. Running out of memory with no-eviction => no new cache can be stored
  2. Running out of memory with volatile-* when tags consume all memory; given they have no expiry (this is on purpose, to avoid 3.), they cannot be freed by Redis => no new cache can be stored
  3. Running RedisTagAwareAdapter with an unsupported eviction policy like allkeys-lru, meaning tags will be evicted first as they are seldom interacted with => invalidation will fail => your application will serve stale data

I'm first and foremost trying to solve people ending up with 3. here, but I'm also contemplating solving 2. in the comments, as it seems theoretically possible if the user uses volatile-ttl (... but it's not the world's most efficient policy, as it disregards which cache is used most frequently/recently... so maybe not worth it).

So I'm not really talking about 1., which I think might be what you had in mind with the filesystem comparison here, right?

@Toflar
Contributor

Toflar commented Oct 31, 2019

Ooooh, I see!! That's indeed a tricky issue to solve. So the reason why I'm not talking about your specific implementation just yet is that I'm trying to find out the root cause of the issue and if we can maybe tackle it completely differently 😊
I question whether storing the cache tags in Redis is the correct approach.
Lookups have to be fast, which is guaranteed by Redis. Tagging and invalidating may be slower imho.
So if you used the default TagAwareAdapter with Redis for the $itemsPool and a different implementation for the $tagsPool (filesystem, or maybe dbal if it needs to be distributed?), you could make sure that the tags aren't purged just like that.
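
For reference, a minimal sketch of that split-pool wiring (the DSN and namespaces are placeholders):

```php
<?php
// Sketch of the suggested split: Redis for the items pool, a store that is not
// subject to Redis eviction for the tags pool.

use Symfony\Component\Cache\Adapter\FilesystemAdapter;
use Symfony\Component\Cache\Adapter\RedisAdapter;
use Symfony\Component\Cache\Adapter\TagAwareAdapter;

$redis = RedisAdapter::createConnection('redis://localhost');

$cache = new TagAwareAdapter(
    new RedisAdapter($redis, 'items'),   // fast, evictable item storage
    new FilesystemAdapter('tags')        // tag versions survive Redis eviction
);
```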

So before we talk about fixing this issue, do you mind clarifying why we even need the specific adapter in the first place? (I didn't deep-dive into the RedisTagAwareAdapter at all yet)

@nicolas-grekas
Member

why we even need the specific adapter in the first place

Because it makes a single roundtrip per read, while TagAwareAdapter makes two per read (most of the time). Note that TagAwareAdapter doesn't store relations so it's much lighter in terms of memory requirements. Everything is a balance :)

@Toflar
Contributor

Toflar commented Oct 31, 2019

I see. What I'm saying is that apparently the requirement for a tag cache entry is that it isn't evicted at all, or at least not before any of the related cache items are.
And from what I understood, we cannot properly ensure that this is the case without making sure the eviction policy of the whole host (apparently we cannot configure this for certain keys only?) is configured properly.
That kind of sounds like Redis doesn't meet our requirements.
I'm not saying it doesn't. I just want to make sure we don't miss a thing 🙈

@andrerom

This comment has been minimized.

@Toflar
Contributor

Toflar commented Oct 31, 2019

Which is why I'm mainly trying to make it more clear for people when they use this adapter the wrong way here.

Okay, I understand. But then the message is confusing to me. It kind of tells me to configure the eviction policy correctly whereas maybe I'm using a completely strange setup anyway.
Let's face it, there are so many possible combinations and some work better for some use cases while others work better for other use cases.
I'm questioning if there's ever a use case where using the RedisTagAwareAdapter makes sense.
I'd love to have Symfony advocate for the best combinations for certain use cases.
Like so:

What's your use case?

  1. A website, single host, no special caching extensions available.

-> Use the FilesystemTagAwareAdapter

  2. A website, load balanced on two hosts, only Redis available.

-> Use the TagAwareAdapter with Redis for the $itemsPool and the PdoAdapter for $tagsPool.

[...]

Maybe some matrix that answers questions like "Okay, I do have memcached, Redis and a database but I'm running on a single host, which combination should I choose?".
And I wonder if in such a matrix, it would ever make sense to reply with RedisTagAwareAdapter :)

@andrerom

This comment has been minimized.

@Toflar
Contributor

Toflar commented Oct 31, 2019

On a cluster setup with 2-5 ms latency to the cache backend, cutting up to half the round trips to the cache backend can make a big difference on page load times.

I'm sure it has its place! If we adjusted the RedisTagAwareAdapter in such a way that it takes an optional $tagsPool just like the TagAwareAdapter, would the additional round trip still be omitted?

@andrerom

This comment has been minimized.

@Toflar
Contributor

Toflar commented Oct 31, 2019

Aaaah! Wow that took a while to click for me, sorry! I see...meh, okay then that warning seems to be the only way to go. Personally I would refuse to save then. Most people won't see the issue in the logs anyway. :(

If you go to Amsterdam, we can discuss further there.

I am, we should meet independent from that issue anyway :P

@andrerom

This comment has been minimized.

@andrerom andrerom force-pushed the redis_throw_on_wrong_eviction_policy branch from 7d5cee5 to f39f1da Compare November 15, 2019 20:46
@andrerom andrerom closed this Nov 15, 2019
@andrerom andrerom deleted the redis_throw_on_wrong_eviction_policy branch November 15, 2019 20:50
@andrerom
Contributor Author

Replaced by #34403.

nicolas-grekas added a commit that referenced this pull request Nov 16, 2019
…rerom)

This PR was squashed before being merged into the 4.4 branch.

Discussion
----------

[Cache] Redis Tag Aware warn on wrong eviction policy

| Q             | A
| ------------- | ---
| Branch?       | 4.4
| Bug fix?      | yes
| Deprecations? | no
| Tickets       | n.a.
| License       | MIT
| Doc PR        | n.a.

Adds validation to make sure Redis has been set up with a supported eviction policy, to avoid surprises when the cache suddenly becomes inconsistent.

This PR replaces #34178; instead of checking in the constructor and throwing, it only checks on save, warns about it, and refuses to save the cache, as suggested on the other PR.

TODO:
- [x] ~Adapt test setups for this to set correct eviction policy~ _It already uses default noeviction_

Commits
-------

e77f6de [Cache] Redis Tag Aware warn on wrong eviction policy