Fix kubernetes dev run script #12229
Conversation
S3 Image Test Results (AMD64 / ARM64): 2 files, 2 suites, 8m 33s ⏱️. Results for commit a7be55e.
This change looks great, thanks!
# entrypoints
if mount_entrypoints:
Can we log a warning that this is currently not really implemented, and may produce confusing results if the user expects it to work?
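The requested warning could be a minimal guard at the top of the branch. The sketch below is hypothetical (function name and message wording are illustrative, not the actual PR code):

```python
import logging

LOG = logging.getLogger(__name__)


def mount_entrypoints_into_pod(mount_entrypoints: bool = False):
    # hypothetical sketch of the requested warning; not the actual
    # implementation from the PR
    if mount_entrypoints:
        LOG.warning(
            "Mounting entrypoints is not fully implemented: the .egg-info "
            "version is not auto-detected and may not match the versions in "
            "the image, which can lead to confusing results."
        )
```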
Also, note for the future: we implement this in localstack.dev.run, so we can take the implementation from there. See localstack.dev.run.configurators._list_files_in_container_image.
Will add some logging. The implementation can be used; the main issue is that the image in docker is not necessarily the same image as in k3d, since k3d has a separate image store and could pull the image itself (or already have a different version pulled). If the cluster is already running, we would need to check the image version inside k3d; if not, we could check the latest one, but in general a match is not guaranteed.
Motivation
With the recent refactorings to namespace packaging (#11190) and the usage of setuptools_scm (#11355), the kubernetes dev run script broke. It still mounted the pro source code into site-packages/localstack-ext instead of inside the localstack namespace package, and the entrypoint mounts did not take the new versioning scheme into account. A main culprit was the implicit behavior of taking parts of the path from the host and using them inside the container, as this prevented mounting the pro source code in the localstack/pro directory. Changing this required some refactoring.
Changes
- Added a --mount-entrypoints flag, to manually opt in to mounting the entrypoints. I did not yet implement version detection to mount the entrypoints into the right .egg-info directory, so this currently has to be adjusted manually to match the versions inside the docker image, if used.
- Added the CONTAINER_RUNTIME=kubernetes and lambda.executor=kubernetes config to the value overrides if pro is enabled (localstack-core differs in community from pro)
Testing
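Until version detection exists, the .egg-info directory name has to be matched to the image by hand. As a rough illustration of the naming convention involved, a hypothetical helper (assuming the image ships plain `<distribution>-<version>.egg-info` directories, with dashes in the distribution name normalized to underscores; the exact layout inside the image may differ):

```python
def egg_info_dirname(distribution: str, version: str) -> str:
    # setuptools normalizes dashes in distribution names to underscores;
    # hypothetical sketch, the suffix layout in the image may differ
    return f"{distribution.replace('-', '_')}-{version}.egg-info"
```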
Run python -m localstack.dev.kubernetes --pro --write --output-dir=<config-dir> and follow the instructions.