
G-API: Add support to set workload type dynamically in both OpenVINO and ONNX OVEP #27460

Open: wants to merge 20 commits into base branch 4.x

Conversation

fcmiron

@fcmiron fcmiron commented Jun 19, 2025

Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
  • The PR is proposed to the proper branch
  • There is a reference to the original bug report and related work
  • There is accuracy test, performance test and test data in opencv_extra repository, if applicable
    Patch to opencv_extra has the same branch name.
  • The feature is well documented and sample code can be built with the project CMake

@fcmiron fcmiron changed the title Add support to set workloadtype dynamically G-API: Add support to set workloadtype dynamically Jun 19, 2025
@fcmiron fcmiron marked this pull request as ready for review June 19, 2025 12:33
@asmorkalov asmorkalov added this to the 4.12.0 milestone Jun 19, 2025
@dmatveev dmatveev requested a review from AsyaPronina June 23, 2025 11:20
@asmorkalov
Contributor

@AsyaPronina Friendly reminder.

@asmorkalov asmorkalov modified the milestones: 4.12.0, 4.13.0 Jun 26, 2025
@AsyaPronina
Contributor

Hello @fcmiron, could I ask you to rename this PR to reflect that this change is only for the OpenVINO backend?

@fcmiron fcmiron changed the title G-API: Add support to set workloadtype dynamically G-API: Add support to set workloadtype in OpenVINO dynamically Jun 30, 2025
@fcmiron fcmiron requested a review from AsyaPronina June 30, 2025 09:43
@fcmiron
Author

fcmiron commented Jul 2, 2025

@asmorkalov can you retrigger the default check?

@silviuhrehoretintel

rebuild default

@fcmiron fcmiron changed the title G-API: Add support to set workloadtype in OpenVINO dynamically G-API: Add support to set workload type dynamically in both OpenVINO and ONNX OVEP Jul 7, 2025
auto workload_arg = cv::gapi::getCompileArg<cv::gapi::wip::ov::WorkloadTypeRef>(compileArgs);
if(workload_arg.has_value()) {
m_workload = workload_arg;
m_workloadId = m_workload.value().get().addListener(std::bind(&GOVExecutable::setWorkLoadType, this, std::placeholders::_1));
Contributor

I don't think this is a workloadId; it is actually a listenerId for the workload_type struct. By the way, will there be more than one listener for this struct?

Author

@fcmiron fcmiron commented Jul 15, 2025

In Protopipe you can have multiple stream/graph structures, so if you pass the same workload type object when compiling all of them (to update the workload type for all of them to the same value), there will be more than one listener.

Contributor

Do we need some mechanism to distinguish two separate WorkloadType variables for two different OV Infer nodes created in Protopipe? Could you re-iterate with the feature requester on this?

Author

They don't need to differentiate between two different OV Infer nodes, they only need to update the workload type for the entire stream

@fcmiron fcmiron requested a review from AsyaPronina July 15, 2025 13:56
std::unordered_map<unsigned int, Callback> listeners;
unsigned int next_id = 0;
public:
unsigned int addListener(Callback cb) {
Contributor

I think you can pass by reference here

@@ -291,6 +292,26 @@ namespace detail
};
}

class WorkloadType {
using Callback = std::function<void(const std::string &type)>;
std::unordered_map<unsigned int, Callback> listeners;
Contributor

Better to place private variables at the end of the class, with explicit use of the private access modifier, for consistency with the rest of the code.

Contributor

I am still not sure about the Listener representation here. Can we somehow hide its callable and identifier in a separate class? Then we could use just std::unordered_set here. I guess each object of that Listener class could have a hash calculated from the callback function pointer, so Listener objects would not need actual id fields; their hashes would be their identifiers. What do you think?

This way we won't store m_workloadType along with m_workloadListenerId in backends, where m_workloadListenerId has a valid value only after a callback is added to the listeners of m_workloadType. We would store m_workloadType along with m_workloadListener instead, and this m_workloadListener would be independent of m_workloadType's internals.

public:
unsigned int addListener(Callback cb) {
unsigned int id = next_id++;
listeners.emplace(id, std::move(cb));
Contributor

Can we use next_id++ right here?

} // namespace onnx
} // namespace gapi
namespace detail {
template<> struct CompileArgTag<std::shared_ptr<cv::gapi::onnx::WorkloadTypeONNX>> {
Contributor

I think you can safely use WorkloadTypeOnnxPtr here

@@ -752,8 +752,16 @@ class Params<cv::gapi::Generic> {
std::string m_tag;
};

class WorkloadTypeONNX : public WorkloadType {};
using WorkloadTypeOnnxPtr = std::shared_ptr<cv::gapi::onnx::WorkloadTypeONNX>;
Contributor

If we keep this using, I suggest writing ONNX in capitals here to align with the type name above.


@@ -291,6 +292,26 @@ namespace detail
};
}

class WorkloadType {
Contributor

I don't think gcommon.hpp is the right place to put WorkloadType. This file is far more general for this kind of compile argument; the ones mentioned here, such as graph_dump_path and use_threaded_executor, are actually backend-independent.

Maybe an additional file could be placed here: https://github.com/opencv/opencv/tree/4.x/modules/gapi/include/opencv2/gapi/infer. Something like workload_type.hpp?

public:
explicit ONNXCompiled(const gapi::onnx::detail::ParamDesc &pp);
~ONNXCompiled();
void configureWorkloadType(cv::gapi::onnx::WorkloadTypeOnnxPtr workload);
Contributor

Maybe we can rename it to listenToWorkloadType(), if it does not change the ONNX workload type state after all the changes.

compiled.compiled_model.set_property({{"WORKLOAD_TYPE", ::ov::WorkloadType::EFFICIENT}});
}
else {
GAPI_LOG_WARNING(NULL, "Unknown value for WORKLOAD_TYPE");
Contributor

Should this be a warning or an exception?

Author

I think a warning is enough: if the value is not valid, the workload type will not be updated, but the pipeline will still work.

@@ -1541,7 +1541,13 @@ cv::gimpl::ov::GOVExecutable::GOVExecutable(const ade::Graph &g,
const cv::GCompileArgs &compileArgs,
const std::vector<ade::NodeHandle> &nodes)
: m_g(g), m_gm(m_g) {

#if defined HAVE_INF_ENGINE && INF_ENGINE_RELEASE >= 2024030000
Contributor

As we already have the HAVE_INF_ENGINE guard, I suggest keeping only the second condition.

@fcmiron fcmiron requested a review from AsyaPronina July 21, 2025 13:03
@fh9621063
ok

@asmorkalov
Contributor

@AsyaPronina What is the PR status?

5 participants