Conversation
When the queue has hit the maximum length, don't start a timer. If a timer exists, flush will cancel it.
Codecov Report
@@           Coverage Diff           @@
##           master     #172   +/-   ##
=========================================
  Coverage     100%     100%
=========================================
  Files           1        1
  Lines         108      109     +1
=========================================
+ Hits          108      109     +1
Continue to review full report at Codecov.
This LGTM! But I would love to have somebody else run this in a Lambda as well, to confirm that this change does indeed fix the issues they were having.
This would also be helpful for bull queue workers. Generally, it would be nice if the batching feature could be disabled via a config option.
we are seeing socket hang up errors when setting
was this merged?
As @f2prateek pointed out, we needed someone to run this PR on a Lambda and check that it doesn't hang.
I think I'd closed it since it hadn't gotten merged for several years. Happy to have someone else take it over the finish line, though.
  await delay(10)
  t.true(client.flush.calledTwice)
})
This test is not clear enough.
If I understood this correctly: even though it waited out the delay, it didn't flush a third time because it had already flushed when the queue reached flushAt.
Maybe the title could be "enqueue - prevent flushing through time interval when already flushed by flushAt".
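For context, here is a sketch of how a test along those lines might read under that title, assuming AVA with a sinon spy on flush and a small promise-based delay helper; the constructor options and require path are assumptions, not taken from this PR:

const test = require('ava')
const sinon = require('sinon')

// Hypothetical client under test; the require path is an assumption.
const Client = require('..')

// Minimal delay helper (the project's tests may use the `delay` package instead).
const delay = ms => new Promise(resolve => setTimeout(resolve, ms))

test('enqueue - prevent flushing through time interval when already flushed by flushAt', async t => {
  // Assumed options: flush after 2 messages, or after 5 ms.
  const client = new Client({ flushAt: 2, flushInterval: 5 })
  sinon.spy(client, 'flush')

  client.enqueue('a')
  client.enqueue('b') // reaches flushAt: first flush, and no timer is left running
  client.enqueue('c')
  client.enqueue('d') // reaches flushAt again: second flush

  // Wait past the flush interval; no third, timer-driven flush should occur.
  await delay(10)
  t.true(client.flush.calledTwice)
})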
Co-authored-by: Patrick Bassut <patrickbassut@hotmail.com>
Resolves #121
When the queue has hit the maximum length, don't start a timer. If a timer exists, flush will cancel it. Timers in Node are part of the event loop, and as long as work is in the event loop the Node process won't exit. This has an impact on things like AWS Lambda, where you're effectively spinning up and shutting down runtimes very quickly.
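As a rough illustration of that behaviour, here is a minimal sketch of an enqueue/flush pair with the guard described above; everything except flushAt and flush is an assumed name, and this is not the library's actual source:

// Sketch only: illustrates the guard described in this PR.
class Client {
  constructor ({ flushAt = 20, flushInterval = 10000 } = {}) {
    this.queue = []
    this.flushAt = flushAt
    this.flushInterval = flushInterval
    this.timer = null
  }

  enqueue (message) {
    this.queue.push(message)

    if (this.queue.length >= this.flushAt) {
      // Queue hit the maximum length: flush immediately and do NOT start
      // a timer, so no pending work is left on the event loop.
      return this.flush()
    }

    // Only arm a timer when the queue is below flushAt and none is running.
    if (this.flushInterval && !this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushInterval)
    }
  }

  flush () {
    // If a timer exists, flush cancels it so the process can exit cleanly.
    if (this.timer) {
      clearTimeout(this.timer)
      this.timer = null
    }
    const batch = this.queue.splice(0)
    // ...send `batch` to the API here...
    return Promise.resolve(batch)
  }
}

With this shape, once the queue drains via flushAt there is nothing left scheduled, so a short-lived runtime such as a Lambda invocation can exit as soon as the in-flight request completes.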