G-API: Add support to set workload type dynamically in both OpenVINO and ONNX OVEP #27460
base: 4.x
Conversation
@AsyaPronina Friendly reminder.
Hello dear @fcmiron, could I please ask you to rename this PR to take into account that this change is only for the OpenVINO backend?
@asmorkalov can you retrigger the default check?
rebuild default
```cpp
auto workload_arg = cv::gapi::getCompileArg<cv::gapi::wip::ov::WorkloadTypeRef>(compileArgs);
if (workload_arg.has_value()) {
    m_workload = workload_arg;
    m_workloadId = m_workload.value().get().addListener(std::bind(&GOVExecutable::setWorkLoadType, this, std::placeholders::_1));
```
I don't think it is a `workloadId`; actually it is a `listenerId` for the `workload_type` struct. By the way, will we have more than one listener to this struct?
In Protopipe you can have multiple stream/graph structures, so if you pass the same workload type object when compiling all of them (to update the workload type for all of them to the same value), there will be more than one listener.
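A hedged sketch of that fan-out; the exact type name/header and the `set()` notifier below are assumptions (only `addListener` appears in this diff):

```cpp
#include <memory>
#include <opencv2/gapi.hpp>

// Sketch only: one workload-type object shared by two compiled streams.
void compile_both(cv::GComputation &graph1, cv::GComputation &graph2)
{
    auto wtype = std::make_shared<cv::gapi::wip::ov::WorkloadType>();

    // Each compilation registers its own callback on the shared object,
    // i.e. the struct ends up with two listeners:
    auto s1 = graph1.compileStreaming(cv::compile_args(wtype));
    auto s2 = graph2.compileStreaming(cv::compile_args(wtype));

    wtype->set("EFFICIENT"); // assumed notifier: one call fans out to both streams
}
```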
Do we need some mechanism to differentiate two separate `WorkloadType` variables for two different OV Infer nodes created in Protopipe? Could you re-iterate with the feature requester on it?
They don't need to differentiate between two different OV Infer nodes; they only need to update the workload type for the entire stream.
```cpp
    std::unordered_map<unsigned int, Callback> listeners;
    unsigned int next_id = 0;
public:
    unsigned int addListener(Callback cb) {
```
I think you can pass by reference here
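For illustration, a hedged sketch of the two signatures in question; since the callback is stored, the current take-by-value-then-move form is also a common idiom (the `return` statements are assumed from the surrounding code):

```cpp
// Current form: take by value, then move into storage (one move for rvalues).
unsigned int addListener(Callback cb) {
    unsigned int id = next_id++;
    listeners.emplace(id, std::move(cb));
    return id;
}

// Reviewer's suggestion: take by const reference (copies on emplace).
unsigned int addListener(const Callback &cb) {
    unsigned int id = next_id++;
    listeners.emplace(id, cb);
    return id;
}
```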
```
@@ -291,6 +292,26 @@ namespace detail
    };
}

class WorkloadType {
    using Callback = std::function<void(const std::string &type)>;
    std::unordered_map<unsigned int, Callback> listeners;
```
Better to place `private` variables at the end of the class, with explicit usage of the `private` access modifier, just for consistency with other code.
I am still not sure about the `Listener` representation here. Can we somehow hide its callable and identifier in some kind of separate class? Then we could use just an `std::unordered_set` here. I guess each object of that `Listener` class can have a hash calculated based on the callback function pointer, so `Listener` objects don't need actual `id` fields, as their hashes will be their identifiers. What do you think?

This way we won't store `m_workloadType` along with `m_workloadListenerId` in backends, where `m_workloadListenerId` will have a valid value only after adding a callback to the listeners of `m_workloadType`. We will store `m_workloadType` along with `m_workloadListener` instead, and this `m_workloadListener` will be independent from the `m_workloadType` algorithms.
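A hedged sketch of that idea (the class name, the hashing scheme, and the `shared_ptr` identity trick are illustrative, not part of the PR): since `std::function` itself is neither hashable nor equality-comparable, one option is to give each listener identity through the address of its heap-allocated callback:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_set>

// Sketch: a listener whose identity is the address of its stored callback.
class Listener {
public:
    using Callback = std::function<void(const std::string&)>;

    explicit Listener(Callback cb)
        : m_cb(std::make_shared<Callback>(std::move(cb))) {}

    void operator()(const std::string &type) const { (*m_cb)(type); }

    bool operator==(const Listener &other) const { return m_cb == other.m_cb; }

    std::size_t hash() const { return std::hash<const void*>{}(m_cb.get()); }

private:
    std::shared_ptr<Callback> m_cb; // copies of a Listener share one identity
};

namespace std {
template<> struct hash<Listener> {
    std::size_t operator()(const Listener &l) const { return l.hash(); }
};
} // namespace std

// WorkloadType could then keep: std::unordered_set<Listener> listeners;
```

A backend would keep its own copy of the `Listener` (the proposed `m_workloadListener`) and hand it back for removal, with no separate numeric id to track.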
```cpp
public:
    unsigned int addListener(Callback cb) {
        unsigned int id = next_id++;
        listeners.emplace(id, std::move(cb));
```
Can we use `next_id++` right here?
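i.e. something like this (a sketch; the `return` is assumed from the surrounding code, since `addListener` returns the new id):

```cpp
unsigned int addListener(Callback cb) {
    // emplace() returns pair<iterator, bool>; the key of the inserted
    // element is exactly the id we hand back to the caller.
    return listeners.emplace(next_id++, std::move(cb)).first->first;
}
```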
```cpp
} // namespace onnx
} // namespace gapi
namespace detail {
template<> struct CompileArgTag<std::shared_ptr<cv::gapi::onnx::WorkloadTypeONNX>> {
```
I think you can safely use `WorkloadTypeOnnxPtr` here.
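That is, roughly the following; the tag string is a placeholder, not necessarily the one used in the PR:

```cpp
namespace detail {
template<> struct CompileArgTag<cv::gapi::onnx::WorkloadTypeOnnxPtr> {
    static const char* tag() { return "gapi.onnx.workload_type"; } // placeholder tag string
};
} // namespace detail
```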
```
@@ -752,8 +752,16 @@ class Params<cv::gapi::Generic> {
    std::string m_tag;
};

class WorkloadTypeONNX : public WorkloadType {};
using WorkloadTypeOnnxPtr = std::shared_ptr<cv::gapi::onnx::WorkloadTypeONNX>;
```
If we preserve this `using`, I suggest having `ONNX` in capitals here, to be aligned with the type name above.
```
@@ -291,6 +292,26 @@ namespace detail
    };
}

class WorkloadType {
```
I don't think that `gcommon.hpp` is the right place to put `WorkloadType` in. This file is far more general for such kinds of compile arguments; `graph_dump_path` and `use_threaded_executor`, mentioned here, are actually backend-independent.
Maybe an additional file could be placed here: https://github.com/opencv/opencv/tree/4.x/modules/gapi/include/opencv2/gapi/infer. Something like `workload_type.hpp`?
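A hedged sketch of what such a header could look like; the path, include guard, and `removeListener` counterpart are assumptions built on the suggestion above:

```cpp
// Proposed: modules/gapi/include/opencv2/gapi/infer/workload_type.hpp
#ifndef OPENCV_GAPI_INFER_WORKLOAD_TYPE_HPP
#define OPENCV_GAPI_INFER_WORKLOAD_TYPE_HPP

#include <functional>
#include <string>
#include <unordered_map>

namespace cv {
namespace gapi {

class WorkloadType {
    using Callback = std::function<void(const std::string &type)>;

public:
    unsigned int addListener(Callback cb);
    void removeListener(unsigned int id); // assumed counterpart to addListener

private:
    std::unordered_map<unsigned int, Callback> listeners;
    unsigned int next_id = 0;
};

} // namespace gapi
} // namespace cv

#endif // OPENCV_GAPI_INFER_WORKLOAD_TYPE_HPP
```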
```cpp
public:
    explicit ONNXCompiled(const gapi::onnx::detail::ParamDesc &pp);
    ~ONNXCompiled();
    void configureWorkloadType(cv::gapi::onnx::WorkloadTypeOnnxPtr workload);
```
Maybe we can rename it to `listenToWorkloadType()`, if it will not change the ONNX workload type state after all the changes.
```cpp
    compiled.compiled_model.set_property({{"WORKLOAD_TYPE", ::ov::WorkloadType::EFFICIENT}});
}
else {
    GAPI_LOG_WARNING(NULL, "Unknown value for WORKLOAD_TYPE");
```
Should this be a warning or an exception?
I think a warning is enough, because if the value is not valid the workload type will simply not be updated, and everything will still work.
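For context, a hedged sketch of the resulting handler behaviour; the member access and string values come from the diff, while the surrounding structure and the `DEFAULT` branch are assumptions:

```cpp
// Sketch: an invalid value is reported but does not interrupt execution;
// the previously configured workload type simply stays in effect.
void setWorkLoadType(const std::string &type) {
    if (type == "DEFAULT") {
        compiled.compiled_model.set_property({{"WORKLOAD_TYPE", ::ov::WorkloadType::DEFAULT}});
    } else if (type == "EFFICIENT") {
        compiled.compiled_model.set_property({{"WORKLOAD_TYPE", ::ov::WorkloadType::EFFICIENT}});
    } else {
        GAPI_LOG_WARNING(NULL, "Unknown value for WORKLOAD_TYPE");
    }
}
```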
```
@@ -1541,7 +1541,13 @@ cv::gimpl::ov::GOVExecutable::GOVExecutable(const ade::Graph &g,
                                            const cv::GCompileArgs &compileArgs,
                                            const std::vector<ade::NodeHandle> &nodes)
    : m_g(g), m_gm(m_g) {

#if defined HAVE_INF_ENGINE && INF_ENGINE_RELEASE >= 2024030000
```
As we have the `HAVE_INF_ENGINE` guard already, I suggest using only the second condition.
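i.e., assuming the surrounding file is already compiled only under `HAVE_INF_ENGINE`, the check could shrink to:

```cpp
#if INF_ENGINE_RELEASE >= 2024030000
    // ... register the workload-type listener ...
#endif
```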
ok
@AsyaPronina What is the PR status?
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.