Short circuit value cache look up to avoid object allocations #109


Merged: 2 commits, Feb 15, 2022
40 changes: 30 additions & 10 deletions src/main/java/org/dataloader/DataLoaderHelper.java
@@ -331,22 +331,33 @@ CompletableFuture<List<V>> invokeLoader(List<K> keys, List<Object> keyContexts,
CompletableFuture<List<Try<V>>> cacheCallCF = getFromValueCache(keys);
return cacheCallCF.thenCompose(cachedValues -> {

assertState(keys.size() == cachedValues.size(), () -> "The size of the cached values MUST be the same size as the key list");

// the following is NOT a Map because keys in data loader can repeat (by design)
// and hence "a","b","c","b" is a valid set of keys
List<Try<V>> valuesInKeyOrder = new ArrayList<>();
List<Integer> missedKeyIndexes = new ArrayList<>();
List<K> missedKeys = new ArrayList<>();
List<Object> missedKeyContexts = new ArrayList<>();
for (int i = 0; i < keys.size(); i++) {
Try<V> cacheGet = cachedValues.get(i);
valuesInKeyOrder.add(cacheGet);
if (cacheGet.isFailure()) {

// if they return a ValueCachingNotSupported exception then we insert this special marker value, and it
// means it's a total miss, we need to get all these keys via the batch loader
if (cachedValues == NOT_SUPPORTED_LIST) {
for (int i = 0; i < keys.size(); i++) {
valuesInKeyOrder.add(ALWAYS_FAILED);
missedKeyIndexes.add(i);
missedKeys.add(keys.get(i));
missedKeyContexts.add(keyContexts.get(i));
}
} else {
assertState(keys.size() == cachedValues.size(), () -> "The size of the cached values MUST be the same size as the key list");
for (int i = 0; i < keys.size(); i++) {
Try<V> cacheGet = cachedValues.get(i);
valuesInKeyOrder.add(cacheGet);
if (cacheGet.isFailure()) {
missedKeyIndexes.add(i);
missedKeys.add(keys.get(i));
missedKeyContexts.add(keyContexts.get(i));
}
}
}
if (missedKeys.isEmpty()) {
//
@@ -442,9 +453,16 @@ int dispatchDepth() {
}
}

private final List<Try<V>> NOT_SUPPORTED_LIST = emptyList();
private final CompletableFuture<List<Try<V>>> NOT_SUPPORTED = CompletableFuture.completedFuture(NOT_SUPPORTED_LIST);
private final Try<V> ALWAYS_FAILED = Try.alwaysFailed();

private CompletableFuture<List<Try<V>>> getFromValueCache(List<K> keys) {
try {
return nonNull(valueCache.getValues(keys), () -> "Your ValueCache.getValues function MUST return a non null CompletableFuture");
} catch (ValueCache.ValueCachingNotSupported ignored) {
// use of a final field prevents CF object allocation for this special purpose
return NOT_SUPPORTED;
} catch (RuntimeException e) {
return CompletableFutureKit.failedFuture(e);
}
@@ -456,16 +474,18 @@ private CompletableFuture<List<V>> setToValueCache(List<V> assembledValues, List
if (completeValueAfterCacheSet) {
return nonNull(valueCache
.setValues(missedKeys, missedValues), () -> "Your ValueCache.setValues function MUST return a non null CompletableFuture")
// we dont trust the set cache to give us the values back - we have them - lets use them
// if the cache set fails - then they wont be in cache and maybe next time they will
// we don't trust the set cache to give us the values back - we have them - lets use them
// if the cache set fails - then they won't be in cache and maybe next time they will
.handle((ignored, setExIgnored) -> assembledValues);
} else {
// no one is waiting for the set to happen here so if its truly async
// it will happen eventually but no result will be dependant on it
// it will happen eventually but no result will be dependent on it
valueCache.setValues(missedKeys, missedValues);
}
} catch (ValueCache.ValueCachingNotSupported ignored) {
// ok no set caching is fine if they say so
} catch (RuntimeException ignored) {
// if we cant set values back into the cache - so be it - this must be a faulty
// if we can't set values back into the cache - so be it - this must be a faulty
// ValueCache implementation
}
return CompletableFuture.completedFuture(assembledValues);
24 changes: 21 additions & 3 deletions src/main/java/org/dataloader/ValueCache.java
@@ -74,13 +74,18 @@ static <K, V> ValueCache<K, V> defaultValueCache() {
a successful Try containing the cached value is returned.
* <p>
* You MUST return a List that is the same size as the keys passed in. The code will assert if you do not.
* <p>
* If your cache does not have anything in it at all, and you want to quickly short-circuit this method and avoid any object allocation
* then throw {@link ValueCachingNotSupported} and the code will know there is nothing in cache at this time.
*
* @param keys the list of keys to get cached values for.
*
* @return a future containing a list of {@link Try} cached values for each key passed in.
*
* @throws ValueCachingNotSupported if this cache wants to short-circuit this method completely
*/
default CompletableFuture<List<Try<V>>> getValues(List<K> keys) {
List<CompletableFuture<Try<V>>> cacheLookups = new ArrayList<>();
default CompletableFuture<List<Try<V>>> getValues(List<K> keys) throws ValueCachingNotSupported {
List<CompletableFuture<Try<V>>> cacheLookups = new ArrayList<>(keys.size());
for (K key : keys) {
CompletableFuture<Try<V>> cacheTry = Try.tryFuture(get(key));
cacheLookups.add(cacheTry);
@@ -106,8 +111,10 @@ default CompletableFuture<List<Try<V>>> getValues(List<K> keys) {
* @param values the values to store
*
* @return a future containing the stored values for fluent composition
*
* @throws ValueCachingNotSupported if this cache wants to short-circuit this method completely
*/
default CompletableFuture<List<V>> setValues(List<K> keys, List<V> values) {
default CompletableFuture<List<V>> setValues(List<K> keys, List<V> values) throws ValueCachingNotSupported {
List<CompletableFuture<V>> cacheSets = new ArrayList<>();
for (int i = 0; i < keys.size(); i++) {
K k = keys.get(i);
@@ -140,4 +147,15 @@ default CompletableFuture<List<V>> setValues(List<K> keys, List<V> values) {
* @return a void future for error handling and fluent composition
*/
CompletableFuture<Void> clear();


/**
* This special exception can be used to short-circuit a caching method
*/
class ValueCachingNotSupported extends UnsupportedOperationException {
@Override
public Throwable fillInStackTrace() {
return this;
}
Member Author:
This is magic - the exception has no stack trace (we don't want one) AND it's fast to allocate because of this.
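
As a side note, a minimal sketch of the pattern (not code from this PR - CacheMissSignal is a made-up name): overriding fillInStackTrace to return this skips the stack walk that Throwable's constructor normally performs, so the exception is cheap to construct and a single pre-allocated instance can be re-thrown as pure control flow.

class CacheMissSignal extends RuntimeException {
    // created once and reused - throwing it performs no per-throw allocation
    static final CacheMissSignal INSTANCE = new CacheMissSignal();

    private CacheMissSignal() {
        super("value caching not supported");
    }

    @Override
    public Throwable fillInStackTrace() {
        return this; // no stack trace is captured, which is what makes this cheap
    }
}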

Review comment:
Does it need to be synchronized?

Member Author:
No - it is in the base class. I will change this

}
}
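
To make the new contract concrete, here is a hedged sketch (not part of this PR - MapBackedValueCache and the map choice are illustrative, and it assumes the Try.succeeded / Try.failed factory methods): while the cache holds nothing it throws ValueCachingNotSupported from getValues, so the data loader skips per-key lookups and goes straight to the batch loader without allocating a future per key.

import org.dataloader.Try;
import org.dataloader.ValueCache;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class MapBackedValueCache<K, V> implements ValueCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    @Override
    public CompletableFuture<V> get(K key) {
        V value = store.get(key);
        if (value != null) {
            return CompletableFuture.completedFuture(value);
        }
        CompletableFuture<V> miss = new CompletableFuture<>();
        miss.completeExceptionally(new IllegalStateException("no cached value for " + key));
        return miss;
    }

    @Override
    public CompletableFuture<List<Try<V>>> getValues(List<K> keys) throws ValueCachingNotSupported {
        if (store.isEmpty()) {
            // total miss - tell the data loader to go straight to the batch loader
            throw new ValueCachingNotSupported();
        }
        List<Try<V>> result = new ArrayList<>(keys.size());
        for (K key : keys) {
            V value = store.get(key);
            result.add(value != null
                    ? Try.succeeded(value)
                    : Try.failed(new IllegalStateException("no cached value for " + key)));
        }
        return CompletableFuture.completedFuture(result); // same size as keys, as the contract requires
    }

    @Override
    public CompletableFuture<V> set(K key, V value) {
        store.put(key, value);
        return CompletableFuture.completedFuture(value);
    }

    @Override
    public CompletableFuture<Void> delete(K key) {
        store.remove(key);
        return CompletableFuture.completedFuture(null);
    }

    @Override
    public CompletableFuture<Void> clear() {
        store.clear();
        return CompletableFuture.completedFuture(null);
    }
}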
30 changes: 25 additions & 5 deletions src/main/java/org/dataloader/impl/NoOpValueCache.java
@@ -1,9 +1,11 @@
package org.dataloader.impl;


import org.dataloader.Try;
import org.dataloader.ValueCache;
import org.dataloader.annotations.Internal;

import java.util.List;
import java.util.concurrent.CompletableFuture;

/**
@@ -20,37 +22,55 @@
@Internal
Review comment:
Maybe promote it to public? We expose a public static final NOOP which may be of use to library consumers

Member Author:
The NOOP is a default impl for never caching. I want it to stay that way. There is no real useful code here honestly for library consumers

public class NoOpValueCache<K, V> implements ValueCache<K, V> {

public static NoOpValueCache<?, ?> NOOP = new NoOpValueCache<>();
/**
* a no op value cache instance
*/
public static final NoOpValueCache<?, ?> NOOP = new NoOpValueCache<>();
Review comment:
If we make it public: introduce a private constructor, plus a builder/factory method for type safety?
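
A sketch of that suggestion (illustrative only - the class stays @Internal in this PR and SharedNoOp is a made-up name): a private constructor plus a generic factory method gives callers a correctly typed instance without them writing an unchecked cast.

final class SharedNoOp<K, V> {
    private static final SharedNoOp<?, ?> INSTANCE = new SharedNoOp<>();

    private SharedNoOp() {
        // private - the only instance is the shared singleton above
    }

    @SuppressWarnings("unchecked")
    static <K, V> SharedNoOp<K, V> instance() {
        return (SharedNoOp<K, V>) INSTANCE; // safe: the instance holds no K or V state
    }
}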


// avoid object allocation by using a final field
private final ValueCachingNotSupported NOT_SUPPORTED = new ValueCachingNotSupported();
private final CompletableFuture<V> NOT_SUPPORTED_CF = CompletableFutureKit.failedFuture(NOT_SUPPORTED);
private final CompletableFuture<Void> NOT_SUPPORTED_VOID_CF = CompletableFuture.completedFuture(null);

/**
* {@inheritDoc}
*/
@Override
public CompletableFuture<V> get(K key) {
return CompletableFutureKit.failedFuture(new UnsupportedOperationException());
return NOT_SUPPORTED_CF;
}

@Override
public CompletableFuture<List<Try<V>>> getValues(List<K> keys) throws ValueCachingNotSupported {
throw NOT_SUPPORTED;
}

/**
* {@inheritDoc}
*/
@Override
public CompletableFuture<V> set(K key, V value) {
return CompletableFuture.completedFuture(value);
return NOT_SUPPORTED_CF;
}

@Override
public CompletableFuture<List<V>> setValues(List<K> keys, List<V> values) throws ValueCachingNotSupported {
throw NOT_SUPPORTED;
}

/**
* {@inheritDoc}
*/
@Override
public CompletableFuture<Void> delete(K key) {
return CompletableFuture.completedFuture(null);
return NOT_SUPPORTED_VOID_CF;
}

/**
* {@inheritDoc}
*/
@Override
public CompletableFuture<Void> clear() {
return CompletableFuture.completedFuture(null);
return NOT_SUPPORTED_VOID_CF;
}
}
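
Relating to the thread above about what library consumers actually touch: consumers normally never reference NoOpValueCache directly - value caching is simply off until they supply their own ValueCache. A hedged wiring sketch (assuming the DataLoaderOptions.setValueCache and DataLoaderFactory.newDataLoader methods from recent 3.x releases, and reusing the hypothetical MapBackedValueCache from the earlier sketch):

import org.dataloader.BatchLoader;
import org.dataloader.DataLoader;
import org.dataloader.DataLoaderFactory;
import org.dataloader.DataLoaderOptions;

import java.util.concurrent.CompletableFuture;

class ValueCacheWiring {
    static DataLoader<String, String> buildLoader() {
        // trivial echo batch loader, purely for illustration
        BatchLoader<String, String> batchLoader = keys -> CompletableFuture.completedFuture(keys);

        DataLoaderOptions options = DataLoaderOptions.newOptions()
                .setValueCache(new MapBackedValueCache<String, String>()); // hypothetical cache from the sketch above

        return DataLoaderFactory.newDataLoader(batchLoader, options);
    }
}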