test: update integration tests to run using Gitlab 16 #2790
Conversation
TimKnight-DWP commented on Feb 8, 2024:
- Updates tests to use Gitlab 16 image
- Updates test to handle changes to the underlying APIs
- Adds programmatic date times to Access Token tests
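For context, a minimal sketch of what "programmatic date times" can look like in an access token test, assuming python-gitlab's `project.access_tokens.create()` and a pytest `project` fixture (the fixture and test name are illustrative, not the PR's exact code):

```python
from datetime import datetime, timedelta, timezone


def test_create_project_access_token(project):
    # Derive the expiry from "now" instead of hardcoding a date,
    # so the test does not start failing once the calendar moves on.
    expires_at = (datetime.now(timezone.utc) + timedelta(days=30)).date().isoformat()

    token = project.access_tokens.create(
        {"name": "test-token", "scopes": ["api"], "expires_at": expires_at}
    )
    assert token.expires_at == expires_at
```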
Thanks a lot for working on this @TimKnight-DWP!
I think this is based on some initial work by @JohnVillalovos so maybe cherry-picking or adding a co-authored-by trailer to include him might be good here. 👍
Also I started something related to bulk imports in #2494, so if you think you're close to getting this done, just mark the import/export tests as xfail and we can tackle them separately as a follow-up, just to unblock this upgrade. Let me know if that works!
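For reference, marking the import/export tests as expected failures is a one-line pytest decorator; a sketch (the test name and fixtures are illustrative):

```python
import pytest


@pytest.mark.xfail(reason="Project import/export flow changed in GitLab 16; tracked separately (#2494)")
def test_project_import_export(gl, project):
    ...
```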
@nejch - something has clearly changed in the user agent handling which was causing failures when we called snippet.user_agent_detail(). I can't see exactly what value that check has in our tests, so I will separate it out into its own test marked as skipped, as I can't see anything in the changelogs explaining what changed.
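A sketch of how that separated-out, skipped test might look, assuming python-gitlab's `user_agent_detail()` on snippets (the fixture, test name, and assertion are illustrative):

```python
import pytest


@pytest.mark.skip(reason="user agent detail behaviour changed in GitLab 16; no changelog entry found yet")
def test_snippet_user_agent_detail(snippet):
    detail = snippet.user_agent_detail()
    assert "user_agent" in detail
```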
@TimKnight-DWP if it only affects an individual test, feel free to also skip it and we'll deal with it separately. GitLab often has subtle breaking changes in the API that are not announced - hence also some of the failures here.
Have done - separated it out into a distinct test so the feature can be validated when there's a chance to come back to it. I also see a few open MRs which look related to some of the remaining API failures.
@TimKnight-DWP also if we are religiously asserting on something that is maybe not so relevant, we could change the tests a bit to really only check what matters. Especially with deleting resources, maybe we just test something simpler if the async deletions are a pain.
Sounds like a good shout - deletions definitely seem to be one of the biggest remaining pains, and always need a bit of sleep time. I'm currently putting some logging in to see if there's an obvious "scheduled for deletion" flag we can use rather than asserting the thing is not there (maybe: if it's still there, assert it is scheduled for deletion; else assert it has been deleted).
I think it may also be worth validating the response from delete APIs such as https://docs.gitlab.com/ee/api/users.html#user-deletion. If GitLab returns a 204, we don't need to check that the thing has been deleted immediately - the API has told us yes, and if it takes a while, that's GitLab functionality rather than python-gitlab functionality. If my hunch is correct, I'll do that for these failing users tests and then take a look through and swap the rest over.
Ah, that's a great idea @TimKnight-DWP. We shouldn't be testing GitLab behavior, just that it receives the right requests. 👍 We could probably do that with a lot of other tests too, now that I think of it.
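Roughly, the shift being agreed on here is from polling until a resource disappears to asserting that the DELETE request itself was accepted; a hedged sketch against python-gitlab (fixture names are illustrative):

```python
import pytest

import gitlab


def test_delete_user(gl, user):
    # python-gitlab raises GitlabDeleteError on any non-2xx response, so the
    # absence of an exception already means GitLab accepted the DELETE
    # (e.g. 204 No Content). The actual removal may then happen asynchronously,
    # which is GitLab behaviour rather than python-gitlab behaviour.
    try:
        gl.users.delete(user.id)
    except gitlab.exceptions.GitlabDeleteError as exc:
        pytest.fail(f"DELETE was not accepted by the API: {exc}")
```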
@nejch I think with this last push the tests should be green 🤞 I'll go through this afternoon and update all the delete tests, at least to just verify the API responded 2xx (i.e. didn't throw an exception), and then squash my commits. Should be good to go then 👍
Codecov Report: all modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@            Coverage Diff            @@
##             main    #2790     +/-   ##
==========================================
- Coverage   96.52%   96.49%   -0.04%
==========================================
  Files          90       90
  Lines        5872     5876       +4
==========================================
+ Hits         5668     5670       +2
- Misses        204      206       +2
Flags with carried forward coverage won't be shown.
@nejch - would you be able to give this another review please? I think it'll also unblock adding integration tests for the CI Job Token PR.
Showing as failing because of the -0.04% coverage change 😿
That's ok @TimKnight-DWP, I can override that (and fix the coverage in the code that is the root cause 😅). I was focused on something else at the moment, sorry!
@JohnVillalovos @nejch @max-wittig @bufferoverflow - this one is good to go in, and then I'll be able to look at getting changes in to modify the …
Left some comments and some nits.
I'm not sure I like the changes to wait_for_sidekiq, and also how it isn't called anymore in a lot of places and is just replaced with sleep. I worry that this will make the tests more likely to have random failures.
It also seems like we no longer test that delete actually deletes what we want deleted, though we are testing that we can call delete and no error is raised.
But on the positive side, we now have tests working with GitLab 16, which is great.
So overall I'm not sure. I would like to hear what @nejch thinks.
With regards to no longer verifying "delete": GitLab has significantly changed how it handles deletes - they happen far more asynchronously, and are thus really hard to account for in the tests without making them even more brittle (some deletes only happen after 1 day regardless). So all we can do is assert we successfully called the delete, and trust GitLab will act on the DELETE REST request, rather than also try to test their functionality.
It's been a month, but if I recall correctly, when doing so I discovered we never got close to the limits due to threading on the container, so the check became somewhat redundant. The worker always showed as "busy" because it was waiting to perform deletes, but it had spare concurrency to perform other tasks. The sleeps could probably go away, to be honest - they are mainly there to help with stability, as sometimes the container/worker is a bit slower than expected and can randomly fail a test.
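For context on the helper being discussed, a polling approach over Sidekiq's process metrics might look roughly like the sketch below; this assumes `gl.sidekiq.process_metrics()` returns a `"processes"` list with a `"busy"` counter per worker, and is not the project's exact fixture:

```python
import time


def wait_for_sidekiq(gl, timeout=30, step=0.5):
    """Return True once no Sidekiq process reports busy jobs, False on timeout."""
    for _ in range(int(timeout / step)):
        time.sleep(step)
        metrics = gl.sidekiq.process_metrics()
        # If every worker process is idle, queued work (e.g. async deletes)
        # should have been processed and the test can continue.
        if all(process["busy"] == 0 for process in metrics["processes"]):
            return True
    return False
```

As noted in the comment above, though, a worker waiting on scheduled deletes can still report as busy, which is why this check turned out to be less useful than fixed sleeps here.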
@JohnVillalovos @nejch - any additional thoughts on this PR? I think it's important for us to get the tests updated to 16, and it also unblocks some later PRs adding new functionality only present in 16+.
Hi @nejch @JohnVillalovos @max-wittig @bufferoverflow - sorry to tag all of you and to nudge this along, but with GitLab 17 now around the corner, I'd like to get this in so the testing suite isn't > 2 versions behind the release, and to unblock some of the proposed JOB_TOKEN_SCOPE PRs, as people may need that functionality when GitLab starts to deprecate the existing token logic in 17.x.
I'm traveling at the moment, so I don't have time to do the review.
Thanks again @TimKnight-DWP for all the work and patience. I have a few more last comments and then I'd say we can get this merged just to unblock things, and improve on the hardcoded sleeps later.
If you could also reword the commit messages to only say chore / test, as these are not really user-facing changes, that'd be great - but if it's too much hassle we can just squash the commits here in the end.
@JohnVillalovos I'm also not sure how I feel about removing all the deletes, but in the end it won't be very reliable with current GitLab versions if we keep them. We might lose some breaking changes in the API this way, but that would be an upstream issue. As for … So I'd say let's try and get this done finally 😅 we can still open follow-up issues to improve on it.
@nejch updated with your comments, but something odd seems to be happening with the GitHub Actions - sudden, unexpected failures in the smoke tests.
Thanks @TimKnight-DWP! The new smoke test failures seem unrelated, as they also appear in #2833 - it might be something new in the build dependencies that changes how the package is built; will need to check.
The culprit seems to be …
- use programmatic dates for expires_at in tokens tests
- set PAT for 16.8 into tests
Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Newer versions of GitLab will refuse to create a user with a weak password. In order for us to move to a newer GitLab version in testing, use a stronger password for the tests that create a user.
- Make sure we're testing python-gitlab functionality and not awaiting on GitLab async functions
- Decouple and improve test stability
Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
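On the weak-password commit above, the kind of change involved is roughly this; a sketch assuming python-gitlab's `gl.users.create()` (the URL, token, test data, and password are placeholders, not the PR's actual values):

```python
import gitlab

gl = gitlab.Gitlab("http://localhost:8080", private_token="<admin-token>")

# GitLab 16 rejects trivially weak passwords on user creation, so the
# functional tests need something longer and harder to guess.
user = gl.users.create(
    {
        "email": "new-user@example.com",
        "username": "new-user",
        "name": "New User",
        "password": "E4xhuUBDxk9vuRgi8RLv",  # placeholder; any sufficiently strong value works
    }
)
```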
Thanks again @TimKnight-DWP and @JohnVillalovos - I'll keep the commits, as I see you've amended the messages and it's a pretty big PR 👍
Let's see if we can improve the sleep() calls in a follow-up. And thanks for your patience 😅