Generalized eigenvalue problem #619
As far as I could tell, there is no way in mpmath to solve the generalized eigenvalue problem Ax = λMx. Is there any hope of adding this functionality?
I started implementing the generalized eigenvalue problem but stopped halfway because I had to take care of other things. If there is a strong need for this, I could try to finish my old code. This would probably produce a generalized Schur decomposition (aka QZ decomposition). More advanced stuff (reordering) would need (much) further work. Anyhow, after writing the QZ algorithm in Python, for speed reasons it would be highly desirable to port it to C afterwards and perhaps include it in something like arb.
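For reference, the QZ (generalized Schur) decomposition factors the matrix pair (A, M) as

```latex
A = Q S Z^*, \qquad M = Q T Z^*,
```

where Q and Z are unitary and S and T are upper triangular; the generalized eigenvalues are then \lambda_i = s_{ii} / t_{ii}, with t_{ii} = 0 corresponding to infinite eigenvalues.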
Personally, I would have a use for it in my baryrat package where it would be very useful for computing roots and poles of rational functions. Yes, it seems that QZ is the algorithm of choice for this problem. I have personally never implemented it but if you have a draft version I would be willing to help with testing or implementation. I thought the matrix decomposition algorithms in mpmath are implemented in pure Python anyway? I'm not too familiar with the internals though. I don't know what "arb" is in this context.
I can try to code a preliminary version of the generalized Schur decomposition, but this will take one to three weeks as the details are crazy complicated. The reason I mention arb is that while the pure Python version of QZ will probably work in reasonable time on 10x10 matrices and below, it will probably choke on 100x100 matrices and above, and that is where most technical applications are.
During a quick search I didn't find any software package, no matter the language, that can solve generalized eigenvalue problems in arbitrary precision. It may exist somewhere out there, but even just having a basic pure Python version working in mpmath would be a huge benefit given that lack of software. Also, I did use the standard eigenvalue Python routines in mpmath for matrices approaching the size you mention (not sure what the largest I tried was, but certainly much more than 10x10) and it worked generally fine. Does a C implementation help that much when the dominating cost is the arithmetic on numbers with many digits? In any case, I would be interested to hear about any progress.
Is there a reason why you cannot reduce this to the standard eigenvalue problem (your M is not invertible?)

The numerical eigenvalue code in Arb is basically a straightforward C translation of @thartmann15's code in mpmath, and yes, the C version is 50x or so faster. Other than that, they are numerically pretty much the same; the only significant difference between mpmath and Arb for a 100x100 matrix is that it will take 50x longer in mpmath. On top of this there is also code in Arb to certify eigenvalues, but the numerical code works well in isolation. I have a writeup about it here: https://fredrikj.net/blog/2018/12/eigenvalues-in-arb/

If someone were to implement code for the generalized eigenvalue problem in Python, I'd be interested in a C translation as well.

Speeding up mpmath is possible. I'm not too keen on optimizing Python code for speed (mpmath is already a mess internally because of stupid tricks to get around Python's slowness, some of them now obsolete technical debt), but it would be nice to have an optional Arb-based backend so that users could have the usual mpmath interface with Arb-like speed. It should be possible to make the C code even faster too, but it requires a different algorithm (a version of the QR algorithm with delayed block reduction so that matrix-matrix multiplication can be used as a kernel).
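For the invertible case, a minimal sketch of that reduction using mpmath's existing eig (a workaround, not a substitute for a proper QZ implementation):

```python
# Reduce A x = lambda M x to the standard problem (M^-1 A) x = lambda x.
# Only sensible when M is invertible and well-conditioned; a singular M
# is exactly the case that needs QZ.
from mpmath import mp, matrix

mp.dps = 50
A = matrix([[2, 1], [1, 3]])
M = matrix([[4, 0], [0, 1]])

B = M**-1 * A        # the reduction step
E, ER = mp.eig(B)    # eigenvalues and right eigenvectors
print(E)
```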
Yes, M is singular in my application. Thanks for the background information on arb and performance. 50x is an impressive speedup indeed. Is that roughly independent of the number of digits used, or what kind of precision are we talking about here? In my applications I've been using around dps=100. In any case I may have to look into arb. You are saying that at the moment mpmath and arb are entirely separate, and there is no bridge between the two?
Yes, you should see that kind of difference at 100 digits.
They are separate, but some interoperability is possible using python-flint (https://github.com/fredrik-johansson/python-flint). Indeed, this works:
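For example, one possible round trip looks like this (a sketch; the exact constructors and attributes are assumptions that may vary between mpmath and python-flint versions):

```python
# Rebuild an mpmath mpf inside Arb from its internal (sign, man, exp) data.
# The _mpf_ attribute and the arb constructor used here are assumptions
# about the respective libraries' interfaces.
import mpmath
import flint

mpmath.mp.dps = 50
flint.ctx.dps = 50

x = mpmath.sqrt(2)
sign, man, exp, _bc = x._mpf_                      # mpf internals
y = flint.arb(-man if sign else man) * flint.arb(2)**exp
print(y)                                           # a ball containing sqrt(2)
```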
(See the docstring for more details.)
I've dug into the performance question a bit and done some benchmarking. I noticed that arithmetic operations on gmpy2 real/complex numbers are across the board around 10x faster than operations on the corresponding mpmath types. I assume this is because gmpy2 uses a floating point type which is completely implemented in C, whereas mpmath uses the bignum integer from gmpy2 but builds the floating point implementation on top of that in pure Python?

Encouraged by that, I tweaked the mpmath QR code (as a simple test case) to directly accept numpy arrays of gmpy2 numbers. Here are the benchmarks for computing the QR decomposition of some matrices:

- 50x50 matrix, dps=100: mpmath: 1.18 s
- 100x100 matrix, dps=100: mpmath: 9.29 s

So in both cases, we get roughly a 12x speedup simply by switching from mpmath types to numpy arrays of gmpy2 numbers. Is there any chance to take advantage of this, for instance by implementing a new mpmath context which uses gmpy2 numbers directly?
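To illustrate the storage scheme (numpy provides the container, gmpy2 the C-implemented arithmetic; a sketch, not the tweaked mpmath code itself):

```python
import numpy as np
import gmpy2
from gmpy2 import mpfr

gmpy2.get_context().precision = 336   # roughly 100 decimal digits in bits

n = 50
# object dtype: each entry is a gmpy2 mpfr, so all arithmetic runs in C
A = np.array([[mpfr(i + 1) / (i + j + 1) for j in range(n)]
              for i in range(n)], dtype=object)
B = A @ A   # numpy dispatches the elementwise work to gmpy2
```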
In my testing, I got similar results: numpy arrays with mpmath content give roughly a 10x speed improvement for matrix operations. The main question is whether one is willing to make numpy a hard dependency for mpmath. This is more a political question than a technical one. But on the other hand, using arb matrix operations directly gives a further speedup, so the numpy question is kind of moot. This kind of question pops up every now and then: see issue 561.
That's surprising to me. I'm quite sure that in my test, the speedup is almost entirely due to the use of gmpy2 numbers rather than numpy itself.

Personally I wouldn't worry about a numpy dependency, since it is the foundation of the entire scientific Python stack; I don't think anyone does computations in Python without having numpy installed. But that's just my view on it.
I'd like to compare my results to flint/arb to get an idea of how much additional speedup is achievable there, but it seems to be quite hard to install. Are there pre-built packages available? This is my problem with depending on flint: if I want to package my own library, it's a problem to depend on other software which is difficult to install. On the other hand, numpy, gmpy2 and mpmath are easy to install via pip/conda. So if I can achieve comparable, if not quite as good, speedups using only those tools, I think it's worth pursuing.
https://anaconda.org/conda-forge/python-flint should install all dependencies for you.
Thanks! That did work for me, but I haven't gotten around to doing benchmarks using flint yet. I still want to do that soon. For now I've repackaged the linear algebra routines from mpmath, but using the numpy/gmpy2 backend, as a separate package: https://github.com/c-f-h/flamp This is mostly for my own consumption at this point, though I imagine someone else might find it useful too. The algorithm I was working on, which uses some of these routines, saw a speedup of around 30x (!) by switching to this new backend, which I'm obviously very happy with. If there's any interest in re-integrating this back into mpmath in some form, I'd be happy to help with that. The algorithms required only minor tweaks to work in the new environment.
Here is a status update on my attempts to implement the generalized eigenvalue problem in mpmath. I have read the relevant passages in some books, looked at my old attempts, and had a look at the LAPACK/EISPACK source code.
I have completed step 1, and step 2 should be relatively straightforward for complex matrices. I am now working on step 3. This might take some time and I will give an update in a week or two.

Concerning flamp: This package is definitely nice to have. There are many speed-conscious users of high-precision linear algebra. One small correction: I have only programmed the eigen/svd routines of mpmath. The other routines (lu_solve, qr_solve, cholesky_solve, lu, qr, cholesky, ...) are the work of other authors.
That's great news. Thanks also for your last remark; I changed the attribution in flamp correspondingly. Here are the promised benchmarks including python-flint/arb. This is for computing the eigenvalues and right eigenvectors of an n x n real matrix with dps=50 and dps=100.
Percentages are relative to the arb baseline. So flamp gets around a 9-11x speedup over mpmath, and arb achieves another roughly 2.5x speedup on top of that. That's not too shabby, considering arb is pure C while flamp is essentially pure Python using only gmpy2.
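For reproducibility, the arb-side call looks roughly like this (assuming acb_mat.eig with a right= keyword as in recent python-flint releases; keyword names and return conventions may differ by version):

```python
import flint

flint.ctx.dps = 100
n = 100
A = flint.acb_mat([[flint.acb(i + 1) / (i + j + 1) for j in range(n)]
                   for i in range(n)])
E, ER = A.eig(right=True)   # eigenvalues and right eigenvectors
```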
Concerning flamp: On second thought, it would be nice to have something like flamp (numpy arrays filled with mpmath or gmpy2 objects) directly inside mpmath. There is certainly a need for this: see issue 618. mpmath has the concept of backends; perhaps one could implement an appropriate backend? I read in a recent blog post that there is a vague plan to implement arb as a backend in mpmath far in the future. If this is realized, one could at the same point look at the necessary changes to the internal plumbing of mpmath and take the necessary steps. Perhaps one could load the dependencies dynamically, so that mpmath's other backends still run without numpy or arb installed?
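A minimal sketch of such dynamic loading (illustrative only; HAVE_FAST_BACKEND and make_matrix are hypothetical names, not mpmath internals):

```python
import importlib.util

# Probe for optional dependencies without hard-failing at import time.
HAVE_FAST_BACKEND = (importlib.util.find_spec("numpy") is not None
                     and importlib.util.find_spec("gmpy2") is not None)

def make_matrix(rows):
    """Return a numpy object array if the fast backend is available,
    otherwise a plain list of lists."""
    if HAVE_FAST_BACKEND:
        import numpy as np
        return np.array(rows, dtype=object)
    return [list(r) for r in rows]
```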
Yes, I think it's very possible to integrate this back into mpmath. My first idea was to do it as a new context, but I don't know the mpmath internals well. It seems backends are a separate concept from contexts? Here are a few things that can be done immediately to improve compatibility without breaking anything:
These changes alone account for a large portion of the tweaks I had to make to port the algorithms.
A "backend" is really just an implementation (types, basic methods) of a context.
Doable.
Historically the methods were needed for compatibility since not all types provided the methods, but they are still needed in places to ensure that results have the right type, e.g.
mpmath matrices are matrices, not arrays, so there is no reason why ordinary multiplication shouldn't work. If you want a backend using numpy arrays for matrices, it is safer to wrap them than to use the numpy type directly.
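A sketch of such a wrapper (a hypothetical class, not a proposed mpmath API): `*` keeps matrix semantics while a numpy object array does the storage:

```python
import numpy as np

class WrappedMatrix:
    """Hypothetical wrapper: numpy object array inside,
    mpmath-style matrix semantics outside."""
    def __init__(self, data):
        self._a = np.array(data, dtype=object)

    def __mul__(self, other):
        # '*' is the matrix product, as for mpmath matrices
        if isinstance(other, WrappedMatrix):
            return WrappedMatrix(self._a @ other._a)
        return WrappedMatrix(self._a * other)   # scalar case

    def __repr__(self):
        return "WrappedMatrix(%r)" % self._a.tolist()
```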
I understand. In fact the context functions don't need to be removed, but since in the relevant routines in the
I get the sentiment. Just like for the previous point, it does no harm to keep the old semantics (deprecation is unnecessary), but perhaps the routines in the