G-API: Implement concurrent executor #24845

Merged: 22 commits into opencv:4.x on Jan 30, 2024

Conversation

@TolyaTalamanov (Contributor) commented Jan 10, 2024

Overview

This PR introduces a new G-API executor called GThreadedExecutor, which can be selected when the GComputation is compiled in serial mode (i.e. via GComputation::compile(...)).

ThreadPool

cv::gapi::own::ThreadPool has been introduced in order to abstract the usage of threads in GThreadedExecutor.
ThreadPool is implemented on top of own::concurrent_bounded_queue.

ThreadPool has only a single method, schedule, which pushes a task into the queue for further execution.
An important note: if a Task executed in the ThreadPool throws an exception, this is UB.
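
For illustration, here is a minimal self-contained sketch of such a pool. It uses a mutex/condvar-protected std::queue as a stand-in for own::concurrent_bounded_queue; all names are illustrative rather than the exact code from this PR. Note the empty task used as the worker termination criterion, an approach that also comes up in the review below.

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    using Task = std::function<void()>;

    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i) {
            m_workers.emplace_back([this]{ worker(); });
        }
    }

    // Push a task into the queue; a worker thread will pick it up.
    // Tasks must not throw: an escaping exception is UB by contract.
    void schedule(Task&& task) {
        {
            std::lock_guard<std::mutex> lk{m_mutex};
            m_queue.push(std::move(task));
        }
        m_cv.notify_one();
    }

    ~ThreadPool() {
        // An empty task is the termination criterion for a worker.
        // Sentinels are queued behind real tasks, so pending work drains first.
        for (std::size_t i = 0; i < m_workers.size(); ++i) {
            schedule(Task{});
        }
        for (auto &w : m_workers) { w.join(); }
    }

private:
    void worker() {
        for (;;) {
            Task task;
            {
                std::unique_lock<std::mutex> lk{m_mutex};
                m_cv.wait(lk, [this]{ return !m_queue.empty(); });
                task = std::move(m_queue.front());
                m_queue.pop();
            }
            if (!task) break; // sentinel received - terminate this worker
            task();
        }
    }

    std::mutex               m_mutex;
    std::condition_variable  m_cv;
    std::queue<Task>         m_queue;
    std::vector<std::thread> m_workers;
};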

GThreadedExecutor

The GThreadedExecutor is mostly a copy-paste of GExecutor; should we extend GExecutor instead?

Implementation details

  1. Build the dependency graph for the Island nodes.
  2. Store the tasks that don't have dependencies in a separate vector so they can be run first.
  3. In GThreadedExecutor::run(), schedule the tasks that have no dependencies; as they complete, they schedule their dependents, and run() waits for the completion of all of them (see the sketch after this list).
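
A rough sketch of this scheduling scheme follows. The names are hypothetical (the real Task/executor types in this PR differ), and it assumes a pool with a schedule() method and a wait() that blocks until all scheduled tasks, including nested ones, have finished:

#include <atomic>
#include <functional>
#include <vector>

// Assumed pool interface (cf. the ThreadPool described above).
struct ThreadPool {
    void schedule(std::function<void()> task);
    void wait(); // blocks until all tasks, incl. nested ones, complete
};

struct Task {
    std::function<void()> body;        // the island execution itself
    std::vector<Task*>    dependents;  // tasks consuming this task's outputs
    unsigned              num_deps = 0u;
    std::atomic<unsigned> ready_deps{0u};
};

// Executed on a worker thread once the task has been scheduled.
void run_task(Task &task, ThreadPool &tp) {
    task.body();
    // Notify every dependent; whichever producer observes the last
    // completed dependency schedules that dependent for execution.
    for (Task *dep : task.dependents) {
        if (dep->ready_deps.fetch_add(1u) + 1u == dep->num_deps) {
            tp.schedule([dep, &tp]{ run_task(*dep, tp); });
        }
    }
}

// GThreadedExecutor::run(), conceptually: kick off the tasks that
// have no dependencies, then block until the whole graph completes.
void run_graph(const std::vector<Task*> &no_dep_tasks, ThreadPool &tp) {
    for (Task *t : no_dep_tasks) {
        tp.schedule([t, &tp]{ run_task(*t, tp); });
    }
    tp.wait();
}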

Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
  • The PR is proposed to the proper branch
  • There is a reference to the original bug report and related work
  • There is accuracy test, performance test and test data in opencv_extra repository, if applicable
    Patch to opencv_extra has the same branch name.
  • The feature is well documented and sample code can be built with the project CMake

@TolyaTalamanov TolyaTalamanov marked this pull request as draft January 10, 2024 15:31
@@ -513,4 +515,23 @@ TEST(GAPI_Pipeline, 1DMatWithinSingleIsland)
EXPECT_EQ(0, cv::norm(out_mat, ref_mat));
}

TEST(GAPI_Pipeline, NewExecutorTest)
Contributor Author:

Was a temp test, need to remove

Contributor Author:

Done


#include "thread_pool.hpp"

#include <iostream>
Contributor Author:

remove

Contributor Author:

Done


void cv::gapi::own::WaitGroup::wait() {
while (task_counter.load() != 0u) {
std::unique_lock<std::mutex> lk{m};
Contributor Author:

Definitely shouldn't be in the loop...

Contributor Author:

Removed

@asmorkalov asmorkalov requested a review from dmatveev January 12, 2024 06:17
@TolyaTalamanov TolyaTalamanov added this to the 4.10.0 milestone Jan 12, 2024
@TolyaTalamanov TolyaTalamanov marked this pull request as ready for review January 12, 2024 10:17
namespace magazine {
namespace {

void bindInArgExec(Mag& mag, const RcDesc &rc, const GRunArg &arg)
Contributor Author:

This part is a pure copy-paste from gexecutor.cpp; perhaps it should be somewhere common (e.g. gcommon.hpp)

}
}

void assignMetaStubExec(Mag& mag, const RcDesc &rc, const cv::GRunArg::Meta &meta) {
Contributor Author:

@dmatveev This is the place where meta is stored into the Mag. I assume there might be a data race; that's why the Mag is currently protected by a mutex.

Contributor:

there won't be a race if we statically preallocate the magazine for all relevant [rc.id]s in all relevant types, but the mutex is fine and we can do that later.

Contributor Author:

Left mutex so far


virtual StreamMsg get() override
{
std::lock_guard<std::mutex> lock{m_mutex};
Contributor Author:

Removing the lock causes a data race even though magazine::getArg is supposed to be a read operation

void meta(const GRunArgP &out, const GRunArg::Meta &m) override
{
const auto idx = out_idx.at(cv::gimpl::proto::ptr(out));
std::lock_guard<std::mutex> lock{m_mutex};
Contributor Author:

I assume it's impossible for two Islands to write to the same memory in parallel, but it causes a data race if the mutex isn't used


void run();
void verify();
std::shared_ptr<GIslandExecutable> exec() { return m_isl_exec; }
Contributor Author:

exec() is redundant for IslandActor; in fact, GIslandExecutable::Ptr can be stored and accessed in GThreadedExecutor and passed to IslandActor by pointer/reference.

Contributor:

Please leave a comment in the code

#include <opencv2/gapi/util/throw.hpp>

void cv::gapi::own::WaitGroup::add() {
std::lock_guard<std::mutex> lk{m};
Contributor Author:

In fact it's not necessary to acquire the mutex in the add() method; task_counter could be an atomic.

But since we still need the mutex for wait() (as it uses a condvar), atomic + mutex seems a weird combination and didn't give any significant performance improvement. Decided to keep it simple so far.

In fact, WaitGroup can be implemented without a condvar and mutex by using two atomics, but it requires C++20 for this: https://en.cppreference.com/w/cpp/atomic/atomic/wait
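
A hedged sketch of what such a C++20 version could look like (a single atomic counter here; this is not what the PR does, since the project targets C++11):

#include <atomic>
#include <cstdint>

class WaitGroup {
public:
    void add() { task_counter.fetch_add(1u, std::memory_order_relaxed); }

    void done() {
        // The release pairs with the acquire in wait(): everything
        // written by the finished task happens-before wait() returning.
        if (task_counter.fetch_sub(1u, std::memory_order_acq_rel) == 1u) {
            task_counter.notify_all(); // C++20: wake the waiter(s)
        }
    }

    void wait() {
        auto cur = task_counter.load(std::memory_order_acquire);
        while (cur != 0u) {
            task_counter.wait(cur);    // C++20: block until the value changes
            cur = task_counter.load(std::memory_order_acquire);
        }
    }

private:
    std::atomic<std::uint32_t> task_counter{0u};
};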

Contributor Author:

WaitGroup has been removed

}
}

void cv::gapi::own::ThreadPool::schedule(cv::gapi::own::ThreadPool::Task task) {
Contributor Author:

ThreadSanitizer also signals problems when schedule is called from different threads, but it seems to be fine since both WaitGroup::add() and Queue::push are supposed to be thread-safe

void cv::gapi::own::ThreadPool::stop() {
wait();
for (uint32_t i = 0; i < m_num_workers; ++i) {
m_queue.push({});
Contributor Author:

An empty task is the termination criterion for the worker; at least worth a comment...

Contributor Author:

Done


#include <opencv2/gapi/own/exports.hpp> // GAPI_EXPORTS

#if defined(HAVE_TBB)
Contributor Author:

Yet another copy-paste of this thing

@dmatveev dmatveev self-assigned this Jan 12, 2024
*
* Specifies a number of threads that should be used by executor.
*/
struct use_threaded_executor
Contributor Author:

Should be exposed to Python later

*/
struct use_threaded_executor
{
explicit use_threaded_executor(uint32_t nthreads = std::thread::hardware_concurrency())
Contributor:

...but normally it needs to respond to setNumThreads(), which sets the number of threads for OpenCV globally (as it is still a set of standalone functions)

Contributor Author:

Good point, I think we can use cv::getNumThreads() there instead. Done

src/executor/gstreamingexecutor.cpp
src/executor/gasync.cpp
src/executor/thread_pool.cpp
Contributor:

Is this a third thread pool in this module alone? :)

Contributor Author:

I guess you're referring to the RequestPools from the ie and ov backends.

switch (arg.index())
{
case GRunArg::index_of<Mat>() :
mag_rmat = make_rmat<RMatOnMat>(util::get<Mat>(arg)); break;
Contributor:

Normally we have the break on the following line

Contributor:

(or the whole case ...: ...; break in a single line)

Contributor Author:

It's just a copy-paste from gexecutor.cpp but no problem to address it :)

Contributor Author:

Done


Comment on lines 129 to 130
cv::gimpl::Mag &mag;
std::mutex &m_mutex;
Contributor:

I believe it is an abstraction leak to expose 1) a data structure and 2) its protection mutex by reference to some external context like this.

If the goal of this whole stub is to populate an input vector for an island, you can ask the object that owns those two (the magazine and the mutex) to do that for you. Then this thread-safety concern falls to that context. So what you could do instead is have a GraphState structure, representing a running graph, and host the magazine in it.

The pro of this approach is that later you may have a single executor serving multiple executable graphs, as their state will be clearly separated from one another.
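
A hedged sketch of that shape, reusing the G-API internal types mentioned in this thread (the layout is hypothetical, not necessarily the code that eventually landed):

#include <mutex>

// GraphState represents one running graph and owns both the magazine
// and its protection mutex, so callers never touch either directly.
struct GraphState {
    cv::gimpl::Mag mag;
    std::mutex     mag_mutex;

    // The thread-safety concern lives here, not at the call sites.
    cv::GRunArg getArg(const cv::gimpl::RcDesc &rc) {
        std::lock_guard<std::mutex> lk{mag_mutex};
        return cv::gimpl::magazine::getArg(mag, rc);
    }
};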

Contributor Author:

Done

}
}

static thread_local cv::gapi::own::ThreadPool* current;
Contributor:

doesn't look good tbh :)

Contributor Author:

Quite handy though: the user doesn't need to keep a ThreadPool reference around to schedule nested tasks.

static void baz();  // assumed to exist, scheduled transitively

static void bar() {
    // do something
    ThreadPool::get()->schedule(baz);
}

static void foo() {
    // do something
    ThreadPool::get()->schedule(bar);
}

...
ThreadPool tp;
tp.schedule(foo);

But if we plan to make the ThreadPool available to all GCompileds, it can be removed for now

Contributor Author:

Removed

}

void cv::gapi::own::ThreadPool::worker() {
current = this;
Contributor:

why do you need current at all?

Contributor Author:

Answered above

Contributor Author:

Removed

}

void cv::gapi::own::ThreadPool::schedule(cv::gapi::own::ThreadPool::Task task) {
m_wg.add();
Contributor:

So you increase your wait count once you schedule a task, and decrease it once you're done with it -- why?

Contributor Author:

The entire WaitGroup is used to implement ThreadPool::wait(), which is supposed to block until all scheduled tasks, including nested ones, are completed

Contributor Author:

Not relevant any more; WaitGroup has been removed

cv::gapi::own::ThreadPool::Task task;
m_queue.pop(task);
if (!task) {
break;
Contributor:

Does this mean you terminate your worker threads once the graph is done?

Contributor Author:

The answer is NO; workers are terminated in ThreadPool::stop(), which is called during GCompiled destruction.

Contributor Author:

Now it's done in the shutdown() method, which is private and called only in ~ThreadPool()

}
}

void cv::gapi::own::ThreadPool::schedule(cv::gapi::own::ThreadPool::Task task) {
Contributor:

Shouldn't this be &&task ?

Contributor Author:

Done

}

void cv::gimpl::IslandActor::verify() {
if (m_e) {
Contributor:

Can we get a simultaneous read + write situation for the exception? Shouldn't it be protected (e.g. atomic)?

Contributor Author:

IslandActor itself doesn't give any thread-safety guarantees; besides, this method is called only on the main thread when all tasks are completed.

break;
}
task();
m_wg.done();
Contributor:

Do we care if this line can be unreachable due to hang/exception in task above?

Contributor Author:

It's assumed that a task can't throw an exception at this point.

It would make sense to check it with something like static_assert(noexcept(task())), but this is available only since C++17.
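
For reference, a small self-contained example of that C++17-only check (illustrative; the PR itself targets C++11 and doesn't use it):

#include <utility>

template <typename F>
void schedule_checked(F&& f) {
    static_assert(noexcept(f()), "ThreadPool tasks must not throw");
    std::forward<F>(f)();  // a real pool would enqueue the task here
}

void safe_task() noexcept {}

int main() {
    schedule_checked(safe_task);          // OK: noexcept function
    // schedule_checked([]{ throw 1; }); // would fail the static_assert
    return 0;
}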

void cv::gimpl::Task::run() {
m_f();
for (auto* dep : m_dependents) {
if (dep->m_ready_deps.fetch_add(1u) == dep->m_num_deps - 1) {
Contributor Author:

Looks like it doesn't need to be memory_order::seq_cst, for sure

Contributor Author:

Done

@@ -11,6 +11,7 @@
#include <functional> // std::hash
#include <vector> // std::vector
#include <type_traits> // decay
#include <thread> // hardware_concurrency
Contributor Author:

Not needed any more

Contributor Author:

Done


@TolyaTalamanov (Contributor Author):

@dmatveev Could you have a look one more time, please?

Comment on lines 276 to 277
explicit use_threaded_executor(
uint32_t nthreads = static_cast<uint32_t>(cv::getNumThreads()))
Contributor:

Probably it's not OK to have cv::getNumThreads() called in the header.

Instead, you could introduce two constructors: a default one (), with nthreads set to cv::getNumThreads() in its initializer list, and the value-based one.

Contributor:

Can't we just declare it here and move the implementation to a cpp file?

Contributor Author:

> Can't we just declare it here and move the implementation to a cpp file?

In that case cv::getNumThreads() would still need to be called in the header anyway (as the default argument expression lives in the declaration).
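
A sketch of the suggested two-constructor approach (hypothetical member names; it keeps the header free of cv::getNumThreads() calls by moving the default constructor's body to a .cpp file):

// In the header -- no cv::getNumThreads() call here:
#include <cstdint>

struct use_threaded_executor
{
    use_threaded_executor();  // defaults to cv::getNumThreads()
    explicit use_threaded_executor(uint32_t nthreads);

    uint32_t num_threads;
};

// In the .cpp file:
#include <opencv2/core/utility.hpp>  // cv::getNumThreads

use_threaded_executor::use_threaded_executor()
    : num_threads(static_cast<uint32_t>(cv::getNumThreads())) {}

use_threaded_executor::use_threaded_executor(uint32_t nthreads)
    : num_threads(nthreads) {}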

Comment on lines 23 to 25
void bindInArgExec(Mag& mag, const RcDesc &rc, const GRunArg &arg) {
if (rc.shape != GShape::GMAT)
{
Contributor:

A strange mix of { styles; can you please stick to the BSD-style one?

Contributor Author:

Fixed

}

void cv::gimpl::GThreadedExecutor::Output::post(cv::GRunArgP&&, const std::exception_ptr& e) {
if (e) { m_eptr = e; }
Contributor:

This is the third indentation style I see in this file

Contributor Author:

Fixed

// Notify every consumer about the completion of one of its dependencies
for (auto* consumer : m_consumers) {
const auto num_ready =
consumer->m_ready_producers.fetch_add(1, std::memory_order_relaxed) + 1;
Contributor:

Why don't you use pre-increment here, again?

Contributor Author:

Only because I specify the memory_order there

Contributor:

@TolyaTalamanov did you find out which is the best order here?

Contributor Author:

It's a good question. I'd say we need to guarantee that a newly scheduled task will see all the changes in the magazine that have been made by its producers, which means memory_order_acq_rel.

But since all magazine changes are currently synchronized by the mutex, this is already guaranteed, so memory_order_relaxed can be used until we get rid of the mutex on the magazine :)
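
In code, the ordering that would be needed once the magazine mutex is gone might look like this (hypothetical names, not the merged code):

#include <atomic>
#include <cstdint>

struct Consumer {
    std::atomic<std::uint32_t> m_ready_producers{0u};
    std::uint32_t              m_num_producers = 0u;
};

// Called by each producer after writing its outputs to the magazine.
// The release half publishes this producer's writes; the acquire half
// makes all producers' writes visible to the thread that observes the
// final count and goes on to schedule the consumer.
inline bool last_producer_done(Consumer &c) {
    const auto num_ready =
        c.m_ready_producers.fetch_add(1u, std::memory_order_acq_rel) + 1u;
    return num_ready == c.m_num_producers;
}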

Contributor:

I'd still vote for operator++(int) as the more concise construct. BUT that's up to you

@TolyaTalamanov (Contributor Author) commented Jan 29, 2024:

The thing is, operator++(int) doesn't accept a std::memory_order argument, and the order I need isn't the default there, so I can't use it :)

@dmatveev (Contributor):

No major problems, but I'd expect reviews from others first. If you align the indentation, we can merge this. Thanks!


// Count the number of last tasks
auto isLast = [](const std::shared_ptr<Task>& task) { return task->isLast(); };
const auto kNumLastsTasks =
Contributor:

Does it handle recursive task creation?

Contributor Author:

Not sure what "recursive" task creation means; could you elaborate, please?

Contributor:

When tasks may spawn new (previously unknown) tasks, I believe. It may be the case with loops, but let's put that aside for now

@TolyaTalamanov (Contributor Author) commented Jan 29, 2024:

The Task itself is an abstraction which handles dependencies, so if it spawns new tasks recursively, that's already taken into account...

};

void run(ExecutionState& state);
bool isLast() const { return m_consumers.empty(); }
Contributor:

Could there be a data race for the vector of consumers here?

Contributor Author:

Once all tasks are created, m_consumers isn't changed, so different threads only perform read operations. Not obvious, though

@smirnov-alexey (Contributor) left a comment:

Looks good

@dmatveev (Contributor) left a comment:

Ok to go if it isn't breaking anything right now. Let it live in parallel with the primary one. Will swap them someday


@TolyaTalamanov (Contributor Author):

@asmorkalov Could you merge it, please?

@asmorkalov (Contributor):

Clang (macOS) reports a warning:

Run python3 $GIT_CACHE/warnings-handling.py
/Users/opencv-cn/GHA-OCV-1/_work/opencv/opencv/opencv/modules/gapi/test/gapi_sample_pipelines.cpp:92:39: warning: taking the max of unsigned zero and a value is always equal to the other value [-Wmax-unsigned-zero]

@asmorkalov (Contributor):

Android build failed:

 FAILED: modules/gapi/CMakeFiles/opencv_gapi.dir/src/executor/thread_pool.cpp.o 
ccache /opt/android-sdk/ndk/25.2.9519653/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ --target=armv7-none-linux-androideabi21 --sysroot=/opt/android-sdk/ndk/25.2.9519653/toolchains/llvm/prebuilt/linux-x86_64/sysroot -DCVAPI_EXPORTS -DOPENCV_WITH_ITT=1 -DTBB_SUPPRESS_DEPRECATED_MESSAGES=1 -DTBB_USE_GCC_BUILTINS=1 -D_USE_MATH_DEFINES -D__OPENCV_BUILD=1 -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -D__TBB_GCC_BUILTIN_ATOMICS_PRESENT=1 -I/home/ci/opencv/modules/gapi/include -I/home/ci/build/o4a/modules/gapi -I/home/ci/opencv/modules/core/include -I/home/ci/opencv/modules/flann/include -I/home/ci/opencv/modules/imgproc/include -I/home/ci/opencv/modules/dnn/include -I/home/ci/opencv/modules/features2d/include -I/home/ci/opencv/modules/calib3d/include -I/home/ci/opencv/modules/video/include -I/home/ci/opencv/modules/gapi/src -I/home/ci/opencv/modules/gapi/src/3rdparty/vasot/include -I/home/ci/opencv/3rdparty/ittnotify/include -I/home/ci/build/o4a/3rdparty/ade/ade-0.1.2d/sources/ade/include -isystem /home/ci/build/o4a -isystem /home/ci/build/o4a/3rdparty/tbb/oneTBB-2020.2/include -DANDROID -fdata-sections -ffunction-sections -funwind-tables -fstack-protector-strong -no-canonical-prefixes -D_FORTIFY_SOURCE=2 -march=armv7-a -mthumb -Wformat -Werror=format-security     -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winconsistent-missing-override -Wno-delete-non-virtual-dtor -Wno-unnamed-type-template-args -Wno-comment -Wno-deprecated-enum-enum-conversion -Wno-deprecated-anon-enum-enum-conversion -fdiagnostics-show-option -Qunused-arguments  -fvisibility=hidden -fvisibility-inlines-hidden  -O3 -DNDEBUG   -DNDEBUG -fPIC -std=c++11 -MD -MT modules/gapi/CMakeFiles/opencv_gapi.dir/src/executor/thread_pool.cpp.o -MF modules/gapi/CMakeFiles/opencv_gapi.dir/src/executor/thread_pool.cpp.o.d -o modules/gapi/CMakeFiles/opencv_gapi.dir/src/executor/thread_pool.cpp.o -c /home/ci/opencv/modules/gapi/src/executor/thread_pool.cpp
/home/ci/opencv/modules/gapi/src/executor/thread_pool.cpp:51:5: error: use of undeclared identifier 'GAPI_Assert'
    GAPI_Assert(task);

@TolyaTalamanov (Contributor Author):

@asmorkalov, thanks! Build has passed

@asmorkalov asmorkalov merged commit 8e43c8f into opencv:4.x Jan 30, 2024