test: update integration tests to run using Gitlab 16 #2790


Merged - 8 commits merged into python-gitlab:main from update-tests on Apr 25, 2024

Conversation

TimKnight-DWP
Contributor

  • Updates tests to use Gitlab 16 image
  • Updates test to handle changes to the underlying APIs
  • Adds programmatic date-times to Access Token tests (sketched below)
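
For illustration, a minimal sketch of what programmatic expiry dates can look like in an access token test - the fixture, token name and scopes are assumptions, not the PR's actual code:

```python
import datetime

def test_project_access_token_expiry(project):
    # Compute expires_at relative to "today" instead of hardcoding a date
    # that will eventually fall in the past and break the test.
    expires_at = (datetime.date.today() + datetime.timedelta(days=30)).isoformat()
    token = project.access_tokens.create(
        {"name": "test-token", "scopes": ["api"], "expires_at": expires_at}
    )
    assert token.expires_at == expires_at
```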

Member

@nejch nejch left a comment


Thanks a lot for working on this @TimKnight-DWP!

I think this is based on some initial work by @JohnVillalovos so maybe cherry-picking or adding a co-authored-by trailer to include him might be good here. 👍

Also I started something related to bulk imports in #2494, so if you think you're close to getting this done, just mark the import/export tests as xfail and we can tackle them separately as a follow-up, just to unblock this upgrade. Let me know if that works!
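
For reference, a minimal sketch of an xfail marker for such a test - the test body below is illustrative, not the actual import/export tests:

```python
import pytest

@pytest.mark.xfail(reason="import/export behaviour changed in GitLab 16; to be handled in a follow-up")
def test_project_export(project):
    # Trigger an export and check that GitLab at least accepts and tracks it.
    export = project.exports.create()
    export.refresh()
    assert export.export_status in ("none", "queued", "started", "finished")
```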

@TimKnight-DWP
Contributor Author

@nejch - something has clearly changed in the user agent detail endpoint, which was causing failures when we called snippet.user_agent_detail()

I can't see exactly what value that check adds in our tests, so I will move it into a separate test marked as skipped - I can't see anything in the changelogs about what changed
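
As a sketch of the skipped test (fixture names are assumptions; python-gitlab exposes the endpoint as user_agent_detail()):

```python
import pytest

@pytest.mark.skip(reason="user agent detail behaviour changed in GitLab 16; no changelog entry found")
def test_snippet_user_agent_detail(snippet):
    # Returns a dict such as {"user_agent": ..., "ip_address": ..., "akismet_submitted": ...}
    detail = snippet.user_agent_detail()
    assert "user_agent" in detail
```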

@nejch
Member

nejch commented Feb 12, 2024

@TimKnight-DWP if it only affects an individual test, feel free to also skip it and we can deal with it separately. GitLab often has subtle breaking changes in the API that are not announced - hence also some of the failures here.

@TimKnight-DWP
Contributor Author

Have done - I separated it out into a distinct test so the feature can be validated when there's a chance to come back to it. I also see a few open MRs around which look related to some of the remaining API failures

@nejch
Member

nejch commented Feb 13, 2024

@TimKnight-DWP also if we are religiously asserting on something that is maybe not so relevant, we could change the tests a bit to really only check what matters. Especially with deleting resources, maybe we just test something simpler if the async deletions are a pain.

@TimKnight-DWP
Contributor Author

Sounds like a good shout - deletions definitely seem to be one of the biggest remaining pains, and always need a bit of sleep time.

I'm currently adding some logging to see if there's an obvious "scheduled for deletion" flag we can use rather than asserting the thing is not there (maybe: if still present -> assert it is scheduled for deletion, else assert it has been deleted)
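
Roughly that shape, as a sketch only - marked_for_deletion_on is an assumption about the flag GitLab exposes for delayed deletion, not a confirmed field here:

```python
import gitlab

def assert_deleted_or_scheduled(gl, project_id):
    """Pass if the project is gone, or if it is still visible but flagged for deletion."""
    try:
        project = gl.projects.get(project_id)
    except gitlab.exceptions.GitlabGetError:
        return  # already deleted
    # marked_for_deletion_on is assumed to be set once delayed deletion is pending
    assert getattr(project, "marked_for_deletion_on", None) is not None
```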

@TimKnight-DWP
Contributor Author

TimKnight-DWP commented Feb 13, 2024

I think it may also be worth validating the response from the Delete APIs, such as: https://docs.gitlab.com/ee/api/users.html#user-deletion

If GitLab returns a 204, we don't need to check that the thing has been immediately deleted - the API has told us yes. If it takes a while, that's GitLab functionality rather than python-gitlab functionality.

If my hunch is correct, I'll do that for these failing users tests and then take a look through and swap the rest over
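
A minimal sketch of that approach, with made-up user attributes - python-gitlab raises GitlabDeleteError on a non-2xx response, so a clean return already means GitLab accepted the DELETE:

```python
def test_delete_user(gl):
    user = gl.users.create(
        {
            "email": "delete-me@example.com",
            "username": "delete-me",
            "name": "Delete Me",
            "password": "E4kTyz9v7wLpSx2!",  # newer GitLab rejects weak passwords
        }
    )
    # gitlab.exceptions.GitlabDeleteError would be raised on a non-2xx response,
    # so reaching the end of the test means the request was accepted; the actual
    # removal may still happen asynchronously on GitLab's side.
    user.delete()
```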

@nejch
Member

nejch commented Feb 13, 2024

> I think it may also be worth validating the response from the Delete APIs, such as: https://docs.gitlab.com/ee/api/users.html#user-deletion
>
> If GitLab returns a 204, we don't need to check that the thing has been immediately deleted - the API has told us yes. If it takes a while, that's GitLab functionality rather than python-gitlab functionality.
>
> If my hunch is correct, I'll do that for these failing users tests and then take a look through and swap the rest over

Ah, that's a great idea @TimKnight-DWP. We shouldn't be testing GitLab behavior, just that it receives the right requests. 👍 We could probably do that with a lot of other tests too, now that I think of it.

@TimKnight-DWP
Contributor Author

@nejch I think with this last push the tests should be green 🤞

I'll go through this afternoon and update at least all the delete tests to just verify the API responded 2xx (i.e. didn't throw an exception), and then squash my commits

Should be good to go then 👍


codecov bot commented Feb 13, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.49%. Comparing base (7ec3189) to head (b45f3fb).
Report is 26 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2790      +/-   ##
==========================================
- Coverage   96.52%   96.49%   -0.04%     
==========================================
  Files          90       90              
  Lines        5872     5876       +4     
==========================================
+ Hits         5668     5670       +2     
- Misses        204      206       +2     
Flag          Coverage       Δ
api_func_v4   82.30% <ø>     (+0.06%) ⬆️
cli_func_v4   83.56% <ø>     (-0.03%) ⬇️
unit          88.27% <ø>     (-0.03%) ⬇️

Flags with carried forward coverage won't be shown.

see 1 file with indirect coverage changes

@TimKnight-DWP TimKnight-DWP marked this pull request as ready for review February 13, 2024 14:00
@TimKnight-DWP TimKnight-DWP force-pushed the update-tests branch 2 times, most recently from 6c12089 to 1bb900f on February 13, 2024 14:03
@TimKnight-DWP
Contributor Author

@nejch - would you be able to give this another review please? I think it'll also unblock adding integration tests for the CI Job Token PR

@TimKnight-DWP TimKnight-DWP force-pushed the update-tests branch 2 times, most recently from 70f0a9c to d91f59d on February 28, 2024 15:52
@TimKnight-DWP
Contributor Author

Showing as failing because of the -0.04% coverage change 😿

@nejch
Member

nejch commented Feb 28, 2024

> Showing as failing because of the -0.04% coverage change 😿

That's ok @TimKnight-DWP, I can override that (and fix the coverage in the code that is the root cause 😅). I've been focusing on something else at the moment, sorry!

@TimKnight-DWP
Contributor Author

@JohnVillalovos @nejch @max-wittig @bufferoverflow - this one is good to go in, and then I'll be able to look at getting in the changes to modify the job_token_scope via python-gitlab, as GitLab have now released the APIs in a main build 👍

Member

@JohnVillalovos JohnVillalovos left a comment


Left some comments and some nits.

I'm not sure I like the changes to wait_for_sidekiq and also how it isn't called anymore in a lot of places and just replaced with sleep. I worry that this will make the tests more likely to have random failures.

Also it seems like we no longer test that delete actually deletes what we want deleted - we are only testing that we can call delete and no error is raised.

But on the positive side, we now have tests working with GitLab 16, which is great.

So overall I'm not sure. I would like to hear what @nejch thinks.

@TimKnight-DWP
Contributor Author

With regards to no longer verifying "delete": GitLab have significantly changed how they handle deletes - it happens far more asynchronously, and is thus really hard to account for in the tests without making them even more brittle (in the case of some deletes it only happens after 1 day regardless). So all we can do is assert that we successfully called the delete, and trust that GitLab will act on the DELETE REST request, rather than also trying to test their functionality.

@TimKnight-DWP
Contributor Author

TimKnight-DWP commented Mar 28, 2024

It's been a month, but if I recall correctly, wait_for_sidekiq did a lot of checking on the assumption that one free worker means we're good. With the delete changes, GitLab stores those deletes in a queue, which can be checked by how much concurrency Sidekiq has used vs the total.

But when doing so I discovered we never got close to the limits, due to threading on the container, so the check became somewhat redundant. The worker always showed as "busy" because it was waiting to perform deletes, but it had spare concurrency to perform other tasks.

The sleeps could probably go away, to be honest - they are mainly there to help with stability; sometimes the container/worker is a bit slower than expected and can randomly fail a test.
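
As a rough sketch of that concurrency check - gl.sidekiq.process_metrics() is an existing python-gitlab call, but the "busy" and "concurrency" payload fields used below are assumptions:

```python
import time

def wait_for_sidekiq_capacity(gl, timeout=60, step=1):
    """Wait until at least one Sidekiq process has spare concurrency,
    instead of requiring a completely idle worker."""
    for _ in range(timeout // step):
        processes = gl.sidekiq.process_metrics()["processes"]
        if any(p["busy"] < p["concurrency"] for p in processes):
            return True
        time.sleep(step)
    return False
```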

@TimKnight-DWP
Contributor Author

@JohnVillalovos @nejch - any additional thoughts on this PR?

I think it's important for us to get the tests updated to 16, but it also unlocks some later PRs adding new functionality only present in 16+

@TimKnight-DWP
Contributor Author

Hi @nejch @JohnVillalovos @max-wittig @bufferoverflow - sorry to tag all of you and to nudge this along, but with GitLab 17 now around the corner, I'd like to get this in so the testing suite isn't more than 2 versions behind the release, and to unblock some of the proposed JOB_TOKEN_SCOPE PRs, as people may need that functionality when GitLab start to deprecate the existing token logic in 17.x

@nejch nejch self-assigned this Apr 23, 2024
@JohnVillalovos
Member

I'm traveling at the moment, so don't have time to do the review.

Member

@nejch nejch left a comment


Thanks again @TimKnight-DWP for all the work and patience. I have a few last comments, and then I'd say we can get this merged just to unblock things, and improve on the hardcoded sleeps later.

If you could also reword the commit messages to only say chore / test, as these are not really user-facing changes, that'd be great - but if it's too much hassle we can just squash the commits here in the end.

@nejch
Member

nejch commented Apr 23, 2024

> Left some comments and some nits.
>
> I'm not sure I like the changes to wait_for_sidekiq and also how it isn't called anymore in a lot of places and just replaced with sleep. I worry that this will make the tests more likely to have random failures.
>
> Also it seems like we no longer test that delete actually deletes what we want deleted - we are only testing that we can call delete and no error is raised.
>
> But on the positive side, we now have tests working with GitLab 16, which is great.
>
> So overall I'm not sure. I would like to hear what @nejch thinks.

@JohnVillalovos I'm also not sure how I feel about removing all the delete assertions, but in the end they won't be very reliable with current GitLab versions if we keep them. We might miss some breaking changes in the API this way, but that would be an upstream issue.

As for wait_for_sidekiq, I also think that was maybe not very reliable. If we find a more generic way, we can also point these tests against external gitlab instances in the future.

So I'd say let's try and get this done finally 😅 - we can still open follow-up issues to improve on it.

@TimKnight-DWP TimKnight-DWP force-pushed the update-tests branch 2 times, most recently from 7bc8546 to 8ebfe1a on April 24, 2024 09:27
@nejch nejch changed the title from "fix: update integration tests to run using Gitlab 16" to "test: update integration tests to run using Gitlab 16" on Apr 24, 2024
@TimKnight-DWP
Contributor Author

@nejch updated per your comments, but something odd seems to be happening with the GitHub Actions - sudden unexpected failures in the smoke tests

@nejch
Member

nejch commented Apr 24, 2024

Thanks @TimKnight-DWP! The new smoke test failures seem unrelated, as they also appear in #2833 - it might be something new in the build dependencies that changes how the package is built; I'll need to check.

@nejch
Member

nejch commented Apr 25, 2024

> Thanks @TimKnight-DWP! The new smoke test failures seem unrelated, as they also appear in #2833 - it might be something new in the build dependencies that changes how the package is built; I'll need to check.

The culprit seems to be pypa/build#770 pypa/setuptools#3593. I'll adapt the tests.

renovate bot and others added 8 commits April 25, 2024 11:00
- use programmatic dates for expires_at in tokens tests
- set PAT for 16.8 into tests

Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Newer versions of GitLab will refuse to create a user with a weak
password. In order for us to move to a newer GitLab version in testing
use a stronger password for the tests that create a user.
- Make sure we're testing python-gitlab functionality,
make sure we're not awaiting on Gitlab Async functions
- Decouple and improve test stability

Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Signed-off-by: Tim Knight <tim.knight1@engineering.digital.dwp.gov.uk>
Member

@nejch nejch left a comment


Thanks again @TimKnight-DWP and @JohnVillalovos - I'll keep the commits as I see you've amended the messages and it's a pretty big PR 👍

Let's see if we can improve the sleep() calls in a follow-up. And thanks for your patience 😅

@nejch nejch enabled auto-merge (rebase) April 25, 2024 09:09
@nejch nejch merged commit 48a6705 into python-gitlab:main Apr 25, 2024
16 checks passed
@TimKnight-DWP TimKnight-DWP deleted the update-tests branch April 29, 2024 12:25