Batch support for solve function in all backends #1705
Conversation
Hi, could this be merged into master? It would really help. Thanks.
@pavanky How much more work is left for this feature to be complete?
I think this PR implements batch support for CUDA, but not for the OpenCL and CPU backends (which don't have batch support implemented at the backend library level). I vote for merging as is, with just CUDA batch support.
Look at the batched GEMM support in recent MKL:
Compare: 4694594 to b85cf13
Can you please also move your solve-related commits right after Pavan's? The MKL option change is not needed, so that commit can be removed; please check the relevant change comment.
If the solve batch support change doesn't work without the pinned memory commit, you can squash that as well into the commit that has all the solve batch support changes you added after Pavan's.
I am changing the title to reflect that this adds batch support for solve alone.
Add support for batching to the CPU and OpenCL backends. This uses the MKL batching functions when MKL is enabled; otherwise it iterates over all of the slices when using plain LAPACK.
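To illustrate the fallback strategy described above, here is a minimal sketch (NumPy is used only as a stand-in for an unbatched per-slice solver; `solve_batched` is a hypothetical helper, not ArrayFire code): when no batched library routine is available, iterate over the slices and solve each system independently.

```python
import numpy as np

def solve_batched(a, b):
    """Solve a[i] @ x[i] = b[i] for every slice i.

    Hypothetical sketch of the LAPACK fallback path: loop over the
    batch dimension and call the unbatched solver on each slice.
    """
    assert a.shape[0] == b.shape[0], "batch dimensions must match"
    # One unbatched solve per slice, stacked back into a batch.
    return np.stack([np.linalg.solve(a[i], b[i]) for i in range(a.shape[0])])

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 3, 3))   # batch of 4 3x3 systems
x_true = rng.standard_normal((4, 3, 1))
b = a @ x_true                       # batched right-hand sides
x = solve_batched(a, b)
```

With MKL's batched routines, the loop above would be replaced by a single library call over the whole batch, which is the point of the detection step below.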
We will first check whether the getrf_batch_strided function is available in MKL to determine whether the batch functionality can be used in ArrayFire. If it is available, we define the AF_USE_MKL_BATCH macro to enable the batching functions.
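One plausible way to wire up that check at build time (a sketch only; the header name, variables, and probed symbol spelling are assumptions, not the PR's actual CMake):

```cmake
# Sketch: probe MKL for a strided batched LAPACK entry point and, if
# present, define AF_USE_MKL_BATCH so the batched code paths compile in.
include(CheckSymbolExists)
set(CMAKE_REQUIRED_LIBRARIES ${MKL_LIBRARIES})
check_symbol_exists(sgetrf_batch_strided "mkl_lapack.h" HAVE_MKL_BATCH)
if(HAVE_MKL_BATCH)
  add_definitions(-DAF_USE_MKL_BATCH)
endif()
```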
When completed: