
Add support for VK_EXT_debug_utils to add labels to Vulkan objects. #13792


Open: mtavenrath wants to merge 2 commits into master.

Conversation

mtavenrath (Contributor)

Step 1: Add names to pipelines to make them easily identifiable in profiling tools.

In step 1, compute pipelines are labeled.
mtavenrath (Contributor, Author)

It is possible to mark queue ranges as well. I have added functionality to mark the queue once per graph iteration with a common name, since no more specific naming information is available.

Would it be possible to add an annotation string to the graph structure, e.g. "token %i, pp", or maybe even the full algorithm name?

github-actions bot added the labels Vulkan (issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) on May 26, 2025.
0cc4m (Collaborator) commented May 28, 2025

This looks fine to me, but I don't currently have time to test it. @jeffbolznv Can you do a review?

It also needs to be rebased.

vk::DebugUtilsObjectNameInfoEXT duoni;
duoni.objectType = vk::ObjectType::ePipeline;
duoni.pObjectName = pipeline->name.c_str();
duoni.objectHandle = reinterpret_cast<uint64_t>(pipeline->pipeline.operator VkPipeline_T *());
Collaborator:

might be more readable as:

Suggested change
duoni.objectHandle = reinterpret_cast<uint64_t>(pipeline->pipeline.operator VkPipeline_T *());
duoni.objectHandle = reinterpret_cast<uint64_t>((VkPipeline_T *)pipeline->pipeline);

@@ -177,6 +177,7 @@ option(GGML_VULKAN_DEBUG "ggml: enable Vulkan debug output"
option(GGML_VULKAN_MEMORY_DEBUG "ggml: enable Vulkan memory debug output" OFF)
option(GGML_VULKAN_SHADER_DEBUG_INFO "ggml: enable Vulkan shader debug info" OFF)
option(GGML_VULKAN_PERF "ggml: enable Vulkan perf output" OFF)
option(GGML_VULKAN_DEBUG_UTILS "ggml: VK_EXT_debug_utils debug information" OFF)
Collaborator:

Is the ifdef really needed? At least for labeling pipelines I don't think there's any overhead. Maybe if we eventually label command buffer regions we might need to be able to enable/disable, but even then doing it by env var would be nicer.

Contributor (Author):

In an ideal world I'd like to see command-buffer and queue labeling as well, which has more overhead. Even then, the cost of the additional check, even once per command, should be negligible, since the branch predictor will do a good job.

If there are no objections from anyone else, I can remove the ifdef and make the feature controllable through an environment variable instead.

Collaborator:

From my side, as long as it doesn't have an overhead, it can be enabled without ifdefs. If it does, it should require a compile flag.

Contributor (Author):

I've added ENV{GGML_VK_DEBUG_MARKERS} as a guard, with the default being off. The check is sufficiently cheap that it does not introduce any measurable overhead.

I also added queue markers with a constant name for full graph execution to make traces more readable.

3 participants