chore: enhance tests for TemplateSchedulePage #9801
Conversation
> What I did end up doing, though, was adding a little bit more logic to the tests so that hopefully, if it does fail again, we'll get slightly more specific information.

Do you mean extra assertions?
  const confirmButton = await screen.findByTestId("confirm-button");
  await user.click(confirmButton);
};

function waitForWithCutoff(callback: () => void | Promise<void>) {
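For context, a wrapper like this can be built on top of Testing Library's `waitFor` by passing an explicit `timeout`. The sketch below is only an illustration of that shape – the 16-second value comes from the PR description and the constant name is made up – not the exact code in this diff:

```ts
import { waitFor } from "@testing-library/react";

// Hypothetical cutoff: above the ~13s the tests normally need,
// but below Jest's 20-second global timeout.
const WAIT_FOR_CUTOFF_MS = 16_000;

// Once the cutoff is reached, waitFor re-throws the last failed assertion
// instead of silently retrying until Jest kills the test.
function waitForWithCutoff(callback: () => void | Promise<void>) {
  return waitFor(callback, { timeout: WAIT_FOR_CUTOFF_MS });
}
```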
I'm not sure I get the idea behind `waitForWithCutoff`. In your setup it could be 13sec, but for somebody else it might be 17sec. I mean, we shouldn't create more time-dependent tests unless there is no way to avoid it.
Oh, that's fair. I keep forgetting that I'm on a Macbook, so the performance is going to be a lot faster. I also agree with you – ideally we shouldn't be introducing time-dependent tests unnecessarily.

My worry is that, with my understanding of the base `waitFor`, it doesn't fail without some kind of configuration object. As in, it will try to make its assertion periodically, but if the assertion fails, it will just go back to waiting – that failed assertion will just get caught and discarded. So if the assertion doesn't succeed before Jest's global cutoff time runs out, we just get the generic timeout message instead of something more specific. With some kind of timeout in place, `waitFor` will eventually stop swallowing the failed assertion and throw it to the test suite.

I have two other ideas, though:
- Export the global timeout value as a constant, and define the tests to have a hard cutoff of half a second before time is up
- Just try extending the timeouts for the specific tests by passing a custom timeout value to the `describe` functions

I'd be leaning more towards number 1 (rough sketch below), but they both feel varying degrees of hacky still.
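For what it's worth, option 1 could look something like this – the constant name and where it gets wired up are hypothetical, not existing project code:

```ts
import { waitFor } from "@testing-library/react";

// Shared constant, also applied in the Jest setup file via jest.setTimeout(...)
export const GLOBAL_TEST_TIMEOUT_MS = 20_000;

// Stop retrying half a second before Jest's own deadline so the real
// assertion error is reported instead of a generic timeout message.
function waitForWithCutoff(callback: () => void | Promise<void>) {
  return waitFor(callback, { timeout: GLOBAL_TEST_TIMEOUT_MS - 500 });
}
```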
Maybe I'm just biased, as we have dealt a lot with endlessly increasing timeouts on the backend side to make tests pass. Usually the fix was to refactor or split the test into two separate paths.
I will leave the decision to you folks! cc @BrunoQuaresma
Unfortunately, there is no good answer for this 😞 since the test environment for FE tests can be "not very reliable". Maybe we could spend some time trying to figure out better test alternatives, but IMO that would be a separate task.
Right. I tried going through the component, and I couldn't see any issues with how things are wired up. There weren't any effects to make the code behave strangely, and the only thing I could see being an issue is that the button is somehow still in a disabled state when the user click action gets made. That would prevent the mocked callback from getting called, which would make the test hang until it either succeeds or Jest kills it after the 20-second global timeout.

I figured we could at least verify that the button is in a clickable state before even kicking off the awaited assertion.
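Something along these lines, as a sketch – the test id comes from the diff above, the helper name is illustrative, and `toBeEnabled` is the jest-dom matcher:

```ts
import { screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";

const clickSubmitButton = async () => {
  const user = userEvent.setup();
  const confirmButton = await screen.findByTestId("confirm-button");
  // Fail immediately with a specific error if the button is still disabled,
  // instead of letting the click silently do nothing and the test hang.
  expect(confirmButton).toBeEnabled();
  await user.click(confirmButton);
};
```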
Linked to #9793 – does not fully address the issue.
Ultimately, I wasn't able to figure out what was only sometimes causing the test to fail. I must have run the test 40+ times while I was kicking the tires and trying different things.
What I did end up doing, though, was adding a little bit more logic to the tests so that hopefully, if it does fail again, we'll get slightly more specific information.
Changes
- Added a cutoff for the `waitFor` calls. The cutoff (16 seconds) is higher than the expected time it takes for the tests to run normally (13 seconds), but still below the globally-configured time (20 seconds)
- By having `waitFor` actually fail instead of hang indefinitely until Jest kills it, we'll get more information
- Added an assertion that the `Submit` button used in the tests isn't disabled when it's clicked
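Roughly, an updated test could combine both changes like this – the component, helper, and mock names are illustrative, not the exact code from this PR:

```tsx
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";

it("calls the submit handler once the schedule form is saved", async () => {
  const user = userEvent.setup();
  const onSubmit = jest.fn();
  render(<TemplateScheduleForm onSubmit={onSubmit} />); // illustrative component

  const submitButton = await screen.findByRole("button", { name: /submit/i });
  expect(submitButton).not.toBeDisabled(); // new guard against a disabled button
  await user.click(submitButton);

  // waitForWithCutoff (sketched earlier) gives up at 16s, below Jest's 20s
  // global timeout, so the real assertion error surfaces on failure.
  await waitForWithCutoff(() => {
    expect(onSubmit).toHaveBeenCalledTimes(1);
  });
});
```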