Fast tensor operations using a convenient Einstein index notation.
- Switched to CUDA.jl instead of CuArrays.jl, which effectively restricts support to Julia 1.4 and higher.
- The default cache size for intermediate results is now the minimum of 4GB or one quarter of your total memory (obtained via `Sys.total_memory()`). Furthermore, the structure (i.e. `size`) and `eltype` of the temporaries are now also used as lookup keys in the LRU cache, so that you can run the same code on objects with different sizes or element types without constantly having to reallocate the temporaries. Finally, the task rather than the `threadid` is used to make the cache compatible with concurrency at any level. As a consequence, several objects for the same temporary location can now be cached, so the cache can grow large quickly. Once the cache can no longer hold all the temporary objects needed for your simulation, it may actually deteriorate performance, and you might be better off disabling the cache altogether with `TensorOperations.disable_cache()`, as sketched below.
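A minimal sketch of turning the cache off is given below; `disable_cache` is the function mentioned above, while the commented-out `enable_cache` call and its keyword argument are assumptions about the cache interface and should be checked against the documentation.

```julia
using TensorOperations

# Turn off caching of intermediate temporaries entirely
# (useful when the temporaries are too large or too varied to fit in the cache).
TensorOperations.disable_cache()

# Assumed API: re-enable the cache with a custom maximum size in bytes;
# verify the exact function name and keyword in the documentation.
# TensorOperations.enable_cache(maxsize = 2 * 2^30)  # e.g. 2 GB
```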
WARNING: TensorOperations 3.0 contains breaking changes if you implemented support for custom array / tensor types by overloading `checked_similar_from_indices` etc.
TensorOperations.jl is mostly used through the `@tensor` macro, which allows one to express a given operation in index notation, a.k.a. Einstein notation (using Einstein's summation convention).
```julia
using TensorOperations

α = randn()
A = randn(5, 5, 5, 5, 5, 5)
B = randn(5, 5, 5)
C = randn(5, 5, 5)
D = zeros(5, 5, 5)
@tensor begin
    D[a, b, c] = A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
    E[a, b, c] := A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
end
```
In the first line of the `@tensor` block, the result of the operation is stored in the preallocated array `D`, whereas the second line uses a different assignment operator `:=` in order to define and allocate a new array `E` of the correct size. The contents of `D` and `E` will be equal.
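As a quick check of the example above (a sketch, assuming the block has just been run in the same session):

```julia
# `D` (written in place) and `E` (newly allocated by `:=`) hold the same result.
D ≈ E  # true (up to floating-point rounding)
```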
For more information, please see the documentation.