Replies: 6 comments 1 reply
-
Regarding the strategy ("You could also run the solver with the default choice of `adaptive_rho_fraction` and see what iteration count the solver is on before the first factorisation"), I am not exactly sure where I should find this information. The output is pasted below (I have changed `PRINT_INTERVAL` from 200 to 1):
problem:  variables n = 671, constraints m = 1339
iter   objective   pri res   dua res   rho   time
status: solved
-
I saw the iteration count 21; rho changes from 1.00e-01 (the default) to 1.51e-02. So should I set `adaptive_rho_interval` to 21? I don't understand the reasoning, and I don't know what I should do to make my program better.
-
The solver determined after 21 iterations that it made sense to try a refactorization, based on the factorization time it took at iteration 0 and the subsequent solve times using that factorization for the first 21 iterations. You should probably run it a few times just to confirm that number, since the factorization and solve times are not deterministic (i.e. you might have other stuff running in the background on the same machine). It is basically telling you that the solver thought 21 iterations was a good point to refactor, at least based on our default setting for `adaptive_rho_fraction`. I suggested before an interval of around 25-50; I would probably just try both and see which is faster for your particular problem on average.
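If it helps, a rough sketch of that comparison using the Python interface is below. The tiny `P`, `q`, `A`, `l`, `u` here are placeholder data (not the 671-variable problem from this thread), the candidate intervals are just the 25 and 50 mentioned above, and the averaging over a few repeated solves is there because the timings are not deterministic.

```python
import numpy as np
import scipy.sparse as sparse
import osqp

# Placeholder QP data; substitute your own P, q, A, l, u here.
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

n_repeats = 5  # timings vary run to run, so average over several solves

for interval in (25, 50):
    times = []
    for _ in range(n_repeats):
        prob = osqp.OSQP()
        # A non-zero adaptive_rho_interval makes the rho update (and the
        # associated refactorization check) happen every `interval` iterations
        # instead of being triggered by the measured factorization time.
        prob.setup(P, q, A, l, u,
                   adaptive_rho=True,
                   adaptive_rho_interval=interval,
                   verbose=False)
        res = prob.solve()
        times.append(res.info.run_time)
    print(f"adaptive_rho_interval={interval}: "
          f"iters={res.info.iter}, mean time={np.mean(times):.3e}s")
```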
-
I really appreciate your reply!
-
Yes, it means that 1.5e-2 is probably a better value. Different values will produce different rates of convergence. The solver tries to guess the best rho based on its overall progress, but only after a number of steps or a total run time that you specify. Our update rule for rho is only a heuristic, although generally quite a good one. If you are really determined to optimize the rate of convergence, then you could also try solving with many different values of rho (without allowing adaptation) and just pick the best one. It will only be "best" for that particular problem, though. For similar problems (e.g. with slightly different right-hand sides) it's just a good guess. If you have lots of similar problems, I would suggest:
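For reference, a minimal sketch of that fixed-rho sweep in the Python interface might look like the following. The problem data are placeholders and the grid of candidate rho values is arbitrary; `adaptive_rho=False` keeps rho fixed for the whole solve.

```python
import numpy as np
import scipy.sparse as sparse
import osqp

# Placeholder QP data; substitute your own P, q, A, l, u here.
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

results = {}
for rho in (1e-3, 1e-2, 1.5e-2, 1e-1, 1.0):  # arbitrary candidate grid
    prob = osqp.OSQP()
    prob.setup(P, q, A, l, u,
               rho=rho,
               adaptive_rho=False,  # disable adaptation so rho stays fixed
               verbose=False)
    res = prob.solve()
    results[rho] = (res.info.iter, res.info.run_time)

best_rho = min(results, key=lambda r: results[r][0])  # fewest iterations
print("rho -> (iterations, run time):", results)
print("best rho for this particular problem:", best_rho)
```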
-
I've tried "solving with many different values of rho (without allowing adaption)", the rates of convergence are changing. So different rho really have large influence on the rate of convergence. But I found that the solution is not good enough(I have a benchmark for the solution) when I use 1.4e-2 as rho. The problem convergence at iteration is 100, it's really less than other rho's choice. I think it may be the reason of that is eps_rel is too large, so I change eps_rel from 1e-3 to 5e-4, the solution becomes good enough. The iterations increase from 100 to 140. |
-
To avoid this you can set `adaptive_rho_fraction` to zero, and then set `adaptive_rho_interval` to some non-zero value. This triggers a rho update / refactorization (or at least a check to see if a rho update will help) at a deterministic iteration number, and the behaviour of the solver will then be completely deterministic.

The best choice depends on the problem, but I would suggest that something like 25-50 is reasonable. If you make it too small then sometimes the solver's idea of the "best" choice of rho will not have stabilised, which might give you erratic behaviour or lead to excessive rho updates. Making it too large means that you potentially waste a lot of time initially with a poor choice of rho before the first update is triggered.

You could also run the solver with the default choice of `adaptive_rho_fraction` and see what iteration count the solver is on before the first factorisation. Then set a value near that as the `adaptive_rho_interval`, as a starting point. A strategy like this might make sense if your problem has extreme sparsity or density.

Originally posted by @goulart-paul in #378 (comment)