feat(scaletest): create automated pprof dumps during scaletest #9887
Conversation
-  --cleanup-job-timeout 15m \
-  --cleanup-timeout 2h
+  --cleanup-job-timeout 2h \
+  --cleanup-timeout 5h
nit: I'm curious, should it be a little shorter than workspace_pod_termination_grace_period_seconds?
We do it the other way: we add 1-2m to workspace_pod_termination_grace_period_seconds in the template instead. 😅
But we should probably parameterize these in the future.
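Not something this PR changes, but a rough sketch of what parameterizing the timeouts could look like; the environment variable names and the 2-minute buffer are assumptions, not existing configuration:

```bash
#!/usr/bin/env bash
# Hypothetical knobs; neither variable exists in the scaletest scripts today.
# Values mirror the PR's new defaults (2h / 5h), expressed in seconds so the
# pod grace period can be derived from them.
: "${SCALETEST_CLEANUP_JOB_TIMEOUT_SECONDS:=$((2 * 60 * 60))}"
: "${SCALETEST_CLEANUP_TIMEOUT_SECONDS:=$((5 * 60 * 60))}"

# The hard-coded flags above would then become:
#   --cleanup-job-timeout "${SCALETEST_CLEANUP_JOB_TIMEOUT_SECONDS}s" \
#   --cleanup-timeout "${SCALETEST_CLEANUP_TIMEOUT_SECONDS}s"

# And the template's workspace_pod_termination_grace_period_seconds could be
# "cleanup job timeout plus a couple of minutes", as described above:
WORKSPACE_POD_TERMINATION_GRACE_PERIOD_SECONDS=$((SCALETEST_CLEANUP_JOB_TIMEOUT_SECONDS + 120))
echo "pod grace period: ${WORKSPACE_POD_TERMINATION_GRACE_PERIOD_SECONDS}s"
```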
@@ -23,22 +23,56 @@ fi

annotate_grafana "workspace" "Agent running" # Ended in shutdown.sh.

{
	pids=()
potential follow-up: this would be insanely cool to have as a standalone script
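Sketching what that standalone script could look like; the pprof listener address, profile set, and interval here are assumptions rather than what the scaletest harness actually uses:

```bash
#!/usr/bin/env bash
# Standalone pprof collector sketch: one background loop per profile type,
# mirroring the pids=() pattern in the diff above.
set -euo pipefail

PPROF_HOST="${PPROF_HOST:-127.0.0.1:6060}" # Assumed coderd pprof listener.
PPROF_DIR="${PPROF_DIR:-./pprof}"
PPROF_INTERVAL="${PPROF_INTERVAL:-300}" # Seconds between dumps.

mkdir -p "${PPROF_DIR}"

collect() {
	local profile=$1
	while true; do
		curl -sSL --fail \
			"http://${PPROF_HOST}/debug/pprof/${profile}" \
			-o "${PPROF_DIR}/${profile}_$(date +%Y%m%d%H%M%S).pprof" || true
		sleep "${PPROF_INTERVAL}"
	done
}

pids=()
for profile in heap goroutine allocs; do
	collect "${profile}" &
	pids+=("$!")
done

# Stop all collectors when the script exits.
trap 'kill "${pids[@]}" 2>/dev/null || true' EXIT
wait
```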
@@ -19,6 +19,9 @@ SCALETEST_STATE_DIR="${SCALETEST_RUN_DIR}/state"
SCALETEST_PHASE_FILE="${SCALETEST_STATE_DIR}/phase"
# shellcheck disable=SC2034
SCALETEST_RESULTS_DIR="${SCALETEST_RUN_DIR}/results"
SCALETEST_PPROF_DIR="${SCALETEST_RUN_DIR}/pprof"

mkdir -p "${SCALETEST_STATE_DIR}" "${SCALETEST_RESULTS_DIR}" "${SCALETEST_PPROF_DIR}"
Follow-up idea: would it make sense to push these to a cloud bucket or something for easier perusal?
I've been thinking about it, and we could definitely do that. Or we could push them to a separate workspace that can serve them in some nifty way. For instance, if we did take Prometheus snapshots, this workspace could run a Prometheus instance with that data, etc.
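A minimal sketch of the bucket variant, assuming gsutil and a GCS bucket are available on the runner; the bucket name and run identifier are placeholders:

```bash
# Hypothetical post-run upload; SCALETEST_PPROF_DIR comes from the diff above,
# everything else here is a placeholder.
SCALETEST_PPROF_BUCKET="${SCALETEST_PPROF_BUCKET:-gs://example-scaletest-artifacts}"
RUN_ID="${RUN_ID:-$(date +%Y%m%d%H%M%S)}"

gsutil -m rsync -r "${SCALETEST_PPROF_DIR}" \
	"${SCALETEST_PPROF_BUCKET}/${RUN_ID}/pprof/"
```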
No description provided.