Cache structs to save memory use on repeated calls to _taylorinteg!
#203
base: main
Conversation
Thanks @PerezHz for the nice addition. Do you have some comparisons/benchmarks? I'll have a look into it in the next few days!
…ept for scalar case), plus minor fixes
@lbenet I'm trying out some changes in this PR which will help reuse more memory while making the cache concept more user-friendly. I'll run benchmarks once this is ready.
@PerezHz: If I create a cache object once and then reuse it, do I have to update the cache?
Thank you for the suggestion! Right now, the workflow with these caches looks as follows:

```julia
# allocation
cache = TaylorIntegration.init_cache(Val(true), t0, q0, maxsteps, order)
# handle parsing
parse_eqs, rv = TaylorIntegration._determine_parsing!(true, f!, cache.t, cache.x, cache.dx, params);
# first integration
sol = taylorinteg!(Val(true), f!, q0, t0, tmax, abstol, rv, cache, params; parse_eqs, maxsteps)
# update
t0 = # new time
q0 = # new initial condition
# second integration
sol2 = taylorinteg!(Val(true), f!, q0, t0, tmax, abstol, rv, cache, params; parse_eqs, maxsteps)
```

That is, the cache is initialized only once, and the update to the new initial condition is handled internally by `_taylorinteg!`.
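As a rough sketch of the benchmarks asked for above, the cached workflow could be compared against the standard allocating call roughly as follows. This is only illustrative: the test problem (a harmonic oscillator), the order/tolerance values, and the non-cached `taylorinteg` call are placeholders, not taken from this PR; the cached call simply reuses the workflow shown above.

```julia
using TaylorIntegration, BenchmarkTools

# Illustrative equations of motion: a simple harmonic oscillator
function f!(dq, q, params, t)
    dq[1] = q[2]
    dq[2] = -q[1]
    nothing
end

order, abstol, maxsteps = 25, 1e-20, 500
t0, tmax = 0.0, 10.0
q0 = [1.0, 0.0]
params = nothing

# Standard path: internal buffers are rebuilt on every call
@btime taylorinteg($f!, $q0, $t0, $tmax, $order, $abstol; maxsteps=$maxsteps)

# Cached path (API as proposed in this PR): buffers are allocated once and reused
cache = TaylorIntegration.init_cache(Val(true), t0, q0, maxsteps, order)
parse_eqs, rv = TaylorIntegration._determine_parsing!(true, f!, cache.t, cache.x, cache.dx, params)
@btime taylorinteg!(Val(true), $f!, $q0, $t0, $tmax, $abstol, $rv, $cache, $params;
    parse_eqs=$parse_eqs, maxsteps=$maxsteps)
```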
This PR introduces some cache structs which allow reusing memory on repeated calls to `_taylorinteg!`.
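For context, a minimal sketch of what such a cache struct could hold. This is hypothetical and not the actual struct introduced by the PR: the field names `t`, `x`, `dx` are suggested by the `cache.t`, `cache.x`, `cache.dx` accesses in the workflow above, while `xaux` and the constructor shown here are assumptions. The point is only that the Taylor-series buffers are allocated once and reused across integrations.

```julia
using TaylorSeries

# Hypothetical sketch of a vector-state cache (not the PR's actual definition)
struct VectorCacheSketch{T<:Real, U<:Number}
    t::Taylor1{T}             # Taylor expansion of the independent variable
    x::Vector{Taylor1{U}}     # Taylor expansions of the dependent variables
    dx::Vector{Taylor1{U}}    # Taylor expansions of their derivatives
    xaux::Vector{Taylor1{U}}  # auxiliary buffer for the Taylor-step recursion
end

# Allocate the buffers once; later integrations would overwrite them in place
function init_cache_sketch(t0::T, q0::Vector{U}, order::Int) where {T<:Real, U<:Number}
    dof = length(q0)
    t = t0 + Taylor1(T, order)
    x = Array{Taylor1{U}}(undef, dof)
    dx = Array{Taylor1{U}}(undef, dof)
    xaux = Array{Taylor1{U}}(undef, dof)
    @inbounds for i in eachindex(q0)
        x[i] = Taylor1(q0[i], order)
        dx[i] = Taylor1(zero(q0[i]), order)
        xaux[i] = Taylor1(zero(q0[i]), order)
    end
    return VectorCacheSketch(t, x, dx, xaux)
end
```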