Allow --retries parameter for tests #6183
Replies: 17 comments
-
To me this leads to bad testing design. You should not design tests that fail for unknown reasons; you should be able to debug them and discover why they fail. Maybe they hit an unknown edge case or a bug in your code. If you depend on an external service (db, files, webservice...) that is unreliable, mock it.
-
I agree with the sentiment above, but I'll also admit that we've implemented a similar feature for our Codeception (acceptance/WebDriver) test platform. The tests run against an enterprise web app whose defects are beyond our control. It's an unfortunate reality, but rerunning failures does help to separate consistent issues from flakes. The challenge, in my opinion, is implementing a solution within Codeception that meets the needs and expectations of different users: do you record a failure if all retries fail or if any fail, do you ignore leading failures, etc. @dardrone, see also \Codeception\Extension\RunFailed. You might be able to achieve your desired behavior with this and some shell scripting (see the sketch below).
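For reference, a minimal sketch of that RunFailed approach, assuming a default project layout (the extension records failing tests to tests/_output/failed, which can then be re-run as the failed group):

```yaml
# codeception.yml -- enable the extension that records failing tests
extensions:
    enabled:
        - Codeception\Extension\RunFailed
```

```bash
# First pass; if anything fails, re-run only the recorded failures once
codecept run || codecept run -g failed
```

Whether the second pass should count as a pass of the whole suite is exactly the policy question raised above.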
-
This feature can be useful for WebDriver testing, but I don't know how to make it consistent with unit and integration tests. So yes, this can be implemented.
-
I would use this for acceptance testing and API testing. Sometimes there are curl errors because the network drops, and it would be nice in the morning to be sure that a failure is a real error and not just a temporary environment break.
-
A problem with this suggestion is that it would make flaky tests harder to spot.
-
@Naktibalda @raistlin I can't think of a good reason for unit tests to retry, but I've encountered a few scenarios where retrying is beneficial, such as automated tests 'waking' the server on startup and then failing because waitForElementVisible timed out due to networking issues. Still cause for concern! It would still be good to know that some tests have to retry more than others, and to surface those as warnings as part of this enhancement.
-
Agree with @dardrone
-
@dardrone @eXorus What would be more useful: declaring retries in the CLI, inside the suite.yml config, or inside specific tests with annotations? (A sketch of all three follows below.) Probably the last option would be the most efficient, as it would allow retrying only the genuinely fragile tests instead of making all tests take 5x longer than they need to. If we stick to the last option, this feature could be implemented similarly to Examples.
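Purely as an illustration of those three options (none of this exists in Codeception; the flag, the config key, and the annotation are hypothetical names for the proposal):

```
# 1) Hypothetical CLI flag:
codecept run acceptance --retries 3

# 2) Hypothetical suite config key (e.g. acceptance.suite.yml):
retries: 3

# 3) Hypothetical per-test annotation, handled like @example:
/**
 * @retries 3
 */
public function checkoutFlow(AcceptanceTester $I) { /* ... */ }
```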
-
@DavertMik I prefer one of the first two options, since I usually cannot predict which test will fail. It also lets you see when a test repeatedly starts needing several attempts (but still succeeds), so you can investigate the root cause. My experience is that the failures can be seemingly random. And to @raistlin's point, I don't think it encourages bad tests, nor do I think it increases total testing time by much (at least not for my team's test suite), because we're always aiming for reliable AND fast tests. Retries would only be a big time-suck if they happened regularly.
-
@DavertMik I could use all 3 options. Options 1 and 2 would be great because with acceptance tests I always have a few tests out of a hundred failing because of timeouts. Option 3 would be nice because my acceptance tests on Internet Explorer sometimes get stuck behind the modal dialog "A script on this page is causing Internet Explorer to run slowly".
-
Couldn't agree more on this one; retrying a failed test would save us a lot of time waiting for a re-run. We currently get 1 out of 5 tests failing for various reasons (mostly network), and there isn't much we can do, as we run on Sauce Labs and TestingBot with the same results. So our simple test suite, which takes around 15 min (already quite long to wait on), ends up taking 30-45 min depending on the almighty RNG king.
-
You could chain
-
@Naktibalda is that something you can actually do, or just a suggestion? It doesn't seem to work for me.
-
I was looking at the same problem. I agree on two points: first, that tests should be better at finding real errors; however, Codeception also produces a lot of false positives, such as failing to click a button that does exist, or similar. I found a somewhat complicated workaround for this, but it requires modifying the framework itself.
-
@DavertMik
-
I am also looking for this 👍
-
This can be done using the command below; refer to the documentation for more info:
-
Sometimes acceptance tests fail for unknown reasons, and it would be nice to have Codeception retry only the failed test scenario up to n times, with n specified in the argument list, similar to how Nightwatch.js does it: http://nightwatchjs.org/guide
Example command:
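Something along these lines (hypothetical; this flag is exactly the feature being requested, modeled on Nightwatch's retry option):

```bash
# Hypothetical: re-run each failed scenario up to 3 times before reporting it as failed
codecept run acceptance --retries 3
```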