Batch support for solve function in all backends #1705

Merged
merged 7 commits into arrayfire:master from linalg_batch on Jul 31, 2021

Conversation

pavanky
Member

@pavanky pavanky commented Jan 9, 2017

@pavanky pavanky added this to the v3.5.0 milestone Jan 9, 2017
@mlloreda mlloreda modified the milestones: v3.5.1, v3.5.0 May 22, 2017
@pavanky pavanky modified the milestones: v3.5.1, v3.6.0 Jun 16, 2017
@pavanky pavanky changed the base branch from devel to master August 23, 2017 15:22
@umar456 umar456 removed this from the v3.6.0 milestone Feb 27, 2018
@mchandra
Contributor

mchandra commented Apr 3, 2018

Hi, could this be merged into master? Would really help. Thanks.

@9prady9
Member

9prady9 commented Apr 3, 2018

@pavanky How much more work is left for this feature to be complete?

@mchandra
Contributor

mchandra commented Apr 3, 2018

I think this PR implements batch support only for CUDA, not for the OpenCL and CPU backends (which don't have batch support implemented at the backend library level). I vote for merging as is -- with just CUDA batch support.
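
For context, here is a minimal usage sketch (not taken from the PR) of what batched solve enables: solving many independent systems in one call instead of looping. It assumes the batch runs along the third dimension, matching other batched ArrayFire operations; the sizes and the residual check are purely illustrative.

```cpp
#include <arrayfire.h>
#include <cstdio>

int main() {
    const int n = 8, nrhs = 1, batch = 1000;

    // One independent n x n system per slice along dimension 2
    // (assumption: the third dimension is the batch dimension).
    af::array A = af::randu(n, n, batch, f64);
    af::array b = af::randu(n, nrhs, batch, f64);

    // Expected result shape: n x nrhs x batch.
    af::array x = af::solve(A, b);

    // Sanity check: batched matmul should reproduce the right-hand sides.
    double maxResidual = af::max<double>(af::abs(af::matmul(A, x) - b));
    std::printf("max residual: %g\n", maxResidual);
    return 0;
}
```

The linked issue at the bottom of this page describes the same workload written as solve() inside a gfor loop over the independent matrices.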

@WilliamTambellini
Contributor

Watch out for batched gemm in recent MKL:
#2146

@umar456 umar456 changed the title [WIP]: Batch support for linear algebra functions Batch support for linear algebra functions Jul 30, 2021
@umar456 umar456 force-pushed the linalg_batch branch 2 times, most recently from 4694594 to b85cf13 on July 30, 2021 03:46
Member

@9prady9 9prady9 left a comment

Can you please also merge your solve-related commits right after Pavan's? The MKL option change is not needed, so that commit can be removed -- please check the relevant change comment.

If the solve batch support change doesn't work without the pinned memory commit, you can merge that as well into the commit that has all the solve batch support changes you added after Pavan's.

I am changing the title to reflect that this adds batch support for solve alone.

@9prady9 9prady9 changed the title Batch support for linear algebra functions Batch support for solve functions in all backends Jul 30, 2021
@9prady9 9prady9 changed the title Batch support for solve functions in all backends Batch support for solve function in all backends Jul 30, 2021
@9prady9 9prady9 added this to the 3.8.1 milestone Jul 30, 2021
Add support for batching to the CPU and OpenCL backends. Uses
the MKL batching functions when MKL is enabled; otherwise it
iterates over all of the slices when using LAPACK.
We first make sure that the getrf_batch_strided function is
available in MKL to determine whether the batch functionality
can be used in ArrayFire. If it is available, we define
AF_USE_MKL_BATCH to enable the batching functions.
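
The commit messages above describe a compile-time switch: when MKL provides getrf_batch_strided, AF_USE_MKL_BATCH is defined and a single batched call is used; otherwise the backend loops over the slices with plain LAPACK. Below is a minimal sketch of that dispatch pattern, not the PR's actual code. mklGetrfBatchStrided is a hypothetical wrapper standing in for the MKL batched entry point; LAPACKE_dgetrf is the standard LAPACKE routine used in the fallback loop.

```cpp
#include <lapacke.h>

#ifdef AF_USE_MKL_BATCH
// Hypothetical wrapper around MKL's strided batched getrf (declaration only).
void mklGetrfBatchStrided(lapack_int m, lapack_int n, double *a, lapack_int lda,
                          long long strideA, lapack_int *ipiv,
                          lapack_int strideP, lapack_int batch);
#endif

// Factorize `batch` column-major n x n matrices stored contiguously with
// stride `strideA` between slices, writing n pivots per slice.
void getrfBatched(double *a, lapack_int n, lapack_int lda, long long strideA,
                  lapack_int *pivots, lapack_int batch) {
#ifdef AF_USE_MKL_BATCH
    // MKL path: one call factorizes all slices at once.
    mklGetrfBatchStrided(n, n, a, lda, strideA, pivots, n, batch);
#else
    // LAPACK path: factorize each slice independently.
    for (lapack_int i = 0; i < batch; ++i) {
        LAPACKE_dgetrf(LAPACK_COL_MAJOR, n, n, a + i * strideA, lda,
                       pivots + i * n);
    }
#endif
}
```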
@9prady9 9prady9 merged commit b6680d5 into arrayfire:master Jul 31, 2021
Development

Successfully merging this pull request may close these issues.

solve() inside gfor() for a large number of independent matrices
6 participants