
vulkan: increase timeout for CI #14574


Merged
1 commit merged into ggml-org:master on Jul 8, 2025

Conversation

jeffbolznv (Collaborator)

Fixes #14569.

The CI run has gotten slower recently with the additional flash attention variants. I think the crash-on-exit workaround will be available relatively soon; after that, we should consider switching the Vulkan testing back to GPU.
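
The patch itself is not inlined in this conversation. As a rough sketch of what an "increase timeout for CI" change typically looks like for a job like this (the workflow path and both timeout values below are assumptions for illustration, not the actual diff), it bumps the --timeout argument passed to ctest, which is measured in seconds:

    # .github/workflows/build.yml (hypothetical excerpt; values are illustrative)
    -        ctest -L main --verbose --timeout 900
    +        ctest -L main --verbose --timeout 3600

On the GitHub Actions side, a job-level timeout-minutes key serves a similar purpose, but it caps the whole job rather than an individual test run.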

jeffbolznv requested a review from 0cc4m on Jul 7, 2025 at 22:33
github-actions bot added the devops label (improvements to build systems and github actions) on Jul 7, 2025
0cc4m merged commit 53903ae into ggml-org:master on Jul 8, 2025
44 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jul 8, 2025
* origin/master:
model : fix hunyuan moe chat template (ggml-org#14584)
model : add SmolLM3 (ggml-org#14581)
memory : fix broken batch splits for recurrent cache (ggml-org#14575)
vulkan : fix rope with partial rotation and non-cont src (ggml-org#14582)
server: Add ability to mount server at prefix (ggml-org#14544)
model : add hunyuan moe (ggml-org#14425)
vulkan: increase timeout for CI (ggml-org#14574)
cuda : fix rope with partial rotation and non-cont src (ggml-org#14580)
CUDA: add bilinear interpolation for upscale (ggml-org#14563)
musa: fix build warnings (unused variable) (ggml-org#14561)
llama : fix incorrect minicpm3 v_states shape (ggml-org#14571)
llama : remove ggml_cont where possible (ggml-org#14568)
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 10, 2025
Labels
devops (improvements to build systems and github actions)
Development

Successfully merging this pull request may close these issues:

CI: fix ubuntu-22-cmake-vulkan (#14569)
3 participants