Add expm (exponential matrix) implementation #156
SciPy has three functions: `scipy.linalg.expm`, `scipy.linalg.expm_frechet`, and `scipy.linalg.expm_cond`.

The Julia language implementation is `exp(::AbstractMatrix)` in the `LinearAlgebra` standard library.
You can try https://github.com/SuperFluffy/rust-expm. I have to be honest: I implemented it more for fun and never used it in production. So YMMV.
@ethanhs see also this issue.
Assuming this issue is still open, I'd love to take a stab at it.
@matthagan15 If you are working on the code, I will write tests, debug or help in any way I can (FYI new to Rust). |
@naren-nallapareddy I have yet to get around to this. I'm still struggling to understand how the degree of the Padé approximation is chosen and how the denominator is handled: I believe you just compute the inverse of the matrix polynomial, but a full inverse seems like a lot of work, so I'm not sure.
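For anyone following along, here is a minimal sketch (not the eventual PR's code) of how the diagonal [m/m] Padé approximant to exp(A) can be applied. The numerator p(A) and denominator q(A) share the same coefficients up to alternating signs, so the "denominator" amounts to one matrix inverse (or, better, a linear solve q(A)X = p(A)) rather than a symbolic inverse of the polynomial. The function name and the `m` parameter here are illustrative:

```rust
use ndarray::Array2;
use ndarray_linalg::Inverse;

/// Sketch of a diagonal [m/m] Padé approximant to exp(A).
fn expm_pade(a: &Array2<f64>, m: usize) -> Array2<f64> {
    let n = a.nrows();
    // Coefficients c_k of the [m/m] Padé approximant to e^x:
    // c_0 = 1, c_{k+1} = c_k * (m - k) / ((2m - k) * (k + 1)).
    let mut c = vec![1.0_f64];
    for k in 0..m {
        c.push(c[k] * (m - k) as f64 / ((2 * m - k) as f64 * (k + 1) as f64));
    }
    let mut p = Array2::<f64>::zeros((n, n)); // p(A) = sum_k c_k A^k
    let mut q = Array2::<f64>::zeros((n, n)); // q(A) = sum_k c_k (-A)^k
    let mut a_pow = Array2::<f64>::eye(n); // A^k, starting at A^0 = I
    for (k, &ck) in c.iter().enumerate() {
        let sign = if k % 2 == 0 { 1.0 } else { -1.0 };
        p = p + &a_pow * ck;
        q = q + &a_pow * (sign * ck);
        a_pow = a_pow.dot(a);
    }
    // exp(A) ≈ q(A)^{-1} p(A); accurate when ||A|| is small, which is
    // what the scaling-and-squaring step is meant to ensure.
    q.inv().expect("q(A) should be invertible for small ||A||").dot(&p)
}
```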
Status update: I have a bare-bones implementation of the norm estimation needed, which seems to be working in preliminary testing (around 90% accuracy against exact values), and I have some expm approximations that seem to be working on a single 3x3 test matrix. Some things still need to be done.
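As a reference point for the norm-estimation testing mentioned above, here is a minimal sketch of the exact matrix 1-norm (the maximum absolute column sum), i.e. the value an estimator can be checked against:

```rust
use ndarray::Array2;

/// Exact matrix 1-norm: the maximum absolute column sum.
/// A cheap norm estimator can be validated against this value.
fn one_norm(a: &Array2<f64>) -> f64 {
    (0..a.ncols())
        .map(|j| a.column(j).iter().map(|x| x.abs()).sum::<f64>())
        .fold(0.0, f64::max)
}
```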
After a busy few days, it seems that expm is working for 2x2 complex rotation matrices! That is, picking a random angle theta, we know that exp(i * theta * sigma_x) = cos(theta) * I + i * sin(theta) * sigma_x, so I tested this over random values of theta. The higher-order approximations give results within a factor of 2 or so of machine precision.
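A sketch of that test, assuming a hypothetical `expm` routine for complex matrices (the names here are illustrative, not the actual API under development):

```rust
use ndarray::{array, Array2};
use num_complex::Complex64;

/// Checks exp(i*theta*sigma_x) = cos(theta)*I + i*sin(theta)*sigma_x
/// for the Pauli-X matrix and returns the largest entrywise error,
/// which the comment above reports landing within a small factor of
/// machine precision.
fn pauli_x_error(theta: f64, expm: impl Fn(&Array2<Complex64>) -> Array2<Complex64>) -> f64 {
    let zero = Complex64::new(0.0, 0.0);
    let one = Complex64::new(1.0, 0.0);
    let i = Complex64::new(0.0, 1.0);
    let sigma_x = array![[zero, one], [one, zero]];
    let expected = Array2::<Complex64>::eye(2) * Complex64::new(theta.cos(), 0.0)
        + &sigma_x * (i * theta.sin());
    let got = expm(&(&sigma_x * (i * theta)));
    // Largest |difference| over all entries.
    (&got - &expected).iter().map(|z| z.norm()).fold(0.0, f64::max)
}
```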
After I have it in a presentable state, I'll try to break it down into a few chunks (total changes are currently sitting at around 1100 lines of code) and send some pull requests!
So I've simplified the code a fair bit, but unfortunately I've found a bug which is preventing me from sending it out. I'm not sure what the bug is, but there seems to be an error introduced that scales with the dimension of the input matrix: at around 200x200 matrices, each entry has an error on the order of 966 * f64::EPSILON, which is about 10^-12, leading to a one-norm error of at least 2 * 10^-10. This may not seem significant, but the per-entry error is around 20 times worse than scipy's implementation, and scipy's per-entry error does not increase with the dimension. I used this in a Monte Carlo numerical test, and below a critical dimension of about 50 (where the error is comparable to scipy's) the numerics worked beautifully, but above a dimension of around 150 the output became incredibly noisy, indicating the errors had accumulated to my working precision. I'm currently trying to determine what could be causing this. It is not a bug in my Padé approximations, as it occurs at all approximation levels (so it doesn't seem to be a dumb code mistake), and because the lower-order approximations do not use any scaling and squaring, it can't be an issue with that either. I've tested the inversions used, and the inverses are working as expected. It may be an issue with one of the subroutines, but in order to unit test those I have to check them against the scipy implementations. Hopefully I will find this soon!
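For concreteness, a small helper along the lines of the measurement described above, reporting the largest entrywise error in units of `f64::EPSILON`; the `reference` argument (e.g. output from scipy or a known closed form) is an assumption of this sketch:

```rust
use ndarray::Array2;

/// Largest entrywise deviation between a computed matrix exponential
/// and a reference, expressed in multiples of f64::EPSILON.
fn max_entry_error_in_eps(got: &Array2<f64>, reference: &Array2<f64>) -> f64 {
    (got - reference)
        .iter()
        .map(|x| x.abs())
        .fold(0.0, f64::max)
        / f64::EPSILON
}
```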
Did some tweaks. I believe the error mentioned previously is not an algorithmic implementation issue, as it does not appear with sparse matrices. There are still some TODOs that I personally have: smartly switch from the exact 1-norm calculation to norm estimation (which I also have implemented on my fork) for large matrices, which is what scipy does. The pull request is located here: #352, for anyone curious.
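A sketch of that TODO, mirroring scipy's strategy of switching between the exact 1-norm and a cheaper estimate for large matrices; the `cutoff` value and the `estimate` closure (standing in for a Higham–Tisseur style estimator like the one mentioned on the fork) are assumptions here:

```rust
use ndarray::Array2;

/// Use the exact 1-norm for small matrices and a cheap estimate for
/// large ones, where the exact column sweep gets expensive.
fn one_norm_adaptive(
    a: &Array2<f64>,
    cutoff: usize,
    estimate: impl Fn(&Array2<f64>) -> f64,
) -> f64 {
    if a.nrows() <= cutoff {
        // Exact: maximum absolute column sum.
        (0..a.ncols())
            .map(|j| a.column(j).iter().map(|x| x.abs()).sum::<f64>())
            .fold(0.0, f64::max)
    } else {
        estimate(a)
    }
}
```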
https://en.wikipedia.org/wiki/Matrix_exponential