[BUG] OutOfMemory error in propagator PropGraphCumulative #932
Removing that line should not reduce the memory consumption.
What values are you using for the -Xmx and -Xms VM arguments when you get out-of-memory errors?
My VM options are -Xmx28g -Xms28g. I don't think this is the problem... I have data for which the resolution finishes by throwing an OutOfMemory error; after removing that line, the resolution finishes successfully. Maybe my reasoning is wrong, but currently I cannot explain it any other way. I also have data that fails with an OutOfMemory error even when that line is removed. In that case it helped us to remove the backtrackable graph: I changed the PropGraphCumulative propagator to work the same way as in the Choco 3 version, and with that change the resolution succeeds. Is there a more conventional solution for this problem? Or would it be possible to make the use of the backtrackable graph optional?
I do not have the details of your problem, and each case is different. However, I still feel that saving edges during solving should not be the solution: the propagator starts with the full graph, so the memory is already consumed, and if filtering is weak you may not be able to remove many edges. The fact that you encounter similar issues even without the break line confirms it. If you are able to remove many edges at the root node, the initial domains may be too large (e.g. [0, horizon] for all tasks), and you would then benefit from setting more restricted initial domains. Otherwise, maybe the graph-based approach is just not suited to your case. The graph approach is already optional; you can use the following signature (passing the incremental parameter as false) to disable it:
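A minimal sketch of what that looks like, assuming the `Model` factory method `cumulative(Task[], IntVar[], IntVar, boolean incremental)`; the task data and dimensions below are purely illustrative:

```java
import org.chocosolver.solver.Model;
import org.chocosolver.solver.variables.IntVar;
import org.chocosolver.solver.variables.Task;

public class NonIncrementalCumulative {
    public static void main(String[] args) {
        Model model = new Model("cumulative without backtrackable graph");
        int n = 4, horizon = 20; // illustrative sizes
        Task[] tasks = new Task[n];
        IntVar[] heights = new IntVar[n];
        for (int i = 0; i < n; i++) {
            IntVar start = model.intVar("s" + i, 0, horizon);
            // Fixed duration of 5; the Task links start + duration = end.
            tasks[i] = new Task(start, model.intVar(5),
                                model.intVar("e" + i, 0, horizon + 5));
            heights[i] = model.intVar(1, 3);
        }
        IntVar capacity = model.intVar(4);
        // Passing incremental = false disables the graph-based
        // (PropGraphCumulative) filtering in favor of the
        // non-incremental propagator.
        model.cumulative(tasks, heights, capacity, false).post();
        model.getSolver().solve();
    }
}
```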
My problem is the scheduling of aircraft manufacturing. The model is huge and quite complicated, and we face the OutOfMemory error in many places. All our test data are solvable by Choco 3; without a fix for this issue, we are unable to use Choco 4.

I studied the code of the backtrackable data structures, and it looks like I was wrong at the beginning: the backtrackable data are saved only when they change. If it worked that simply, the recursion level at which edges are removed would not influence the amount of consumed memory. However, a bitset is saved as a set of longs, and one long stores several values. When one of them changes, a new value of the whole long is saved, which means that the unchanged values sharing that long are duplicated. So if we need to remove M edges and remove them at different recursion levels, we need 2 x M x |long| memory. If we remove several edges at one recursion level, it can happen that one saved long covers two or more removed edges, and we save memory. If we limit the number of edges removed at one recursion level, fewer removed edges share one saved word and more memory is consumed. I admit this is not significant in many cases, but our problem is specific in using many cumulative constraints and big graphs at the beginning; that's why removing that line can help us.

If we use the non-incremental cumulative constraint, we lose the information saved in the graph; this costs more time and the resolution is much slower. If we replace the backtrackable graph in the incremental cumulative constraint with a non-backtrackable graph, as is done in Choco 3, all our test problems are solvable.
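To make the word-duplication argument concrete, here is a toy copy-on-write word, a deliberately simplified model of trailing rather than choco's actual backtrackable bitset: clearing M bits at M different search depths trails M full 64-bit words, while clearing them all at one depth trails a single word.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy copy-on-write bitset word: saves the full 64-bit word the first
// time it is modified at a given search depth. A simplified model of
// trailing, not choco-solver's actual implementation.
final class TrailedWord {
    private long word = ~0L;          // all 64 bits set
    private int lastSavedDepth = -1;  // depth of the last saved copy
    private final Deque<long[]> trail = new ArrayDeque<>(); // {depth, savedWord}

    void clearBit(int bit, int depth) {
        if (depth != lastSavedDepth) {
            trail.push(new long[]{depth, word}); // copies the whole word
            lastSavedDepth = depth;
        }
        word &= ~(1L << bit);
    }

    int trailedWords() { return trail.size(); }
}

public class TrailingCost {
    public static void main(String[] args) {
        // Clearing 8 bits at 8 different depths trails 8 full words...
        TrailedWord scattered = new TrailedWord();
        for (int i = 0; i < 8; i++) scattered.clearBit(i, /*depth=*/ i);

        // ...while clearing the same 8 bits at one depth trails only 1.
        TrailedWord batched = new TrailedWord();
        for (int i = 0; i < 8; i++) batched.clearBit(i, /*depth=*/ 0);

        System.out.println(scattered.trailedWords()); // 8
        System.out.println(batched.trailedWords());   // 1
    }
}
```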
In the Choco 3 approach, the graph was computed only once at the root node and never edited afterwards. If no filtering occurs at the root node, it is equivalent to having no graph at all. Therefore, if having no graph at all is not satisfying but the non-dynamic root graph (Choco 3) is useful, it means that a lot of filtering occurs at the root node, which tends to indicate that the initial domains are probably too large. Maybe they are then restricted by unary constraints, e.g. x > 42. We always recommend building initial domains as tight as possible. We will investigate the tradeoffs of alternative approaches; in the meantime, you can create a copy of the Choco 3 propagator, add it to your project, and use it within Choco 4.
Hello @kristynak. I have some issues with memory usage in some packing problems as well, and it turns out that for big problems most of my memory goes into the IStateBitSet instances I use. In practice, they are mostly zeroed except for a few clusters at the high indices. I am working on a sparse implementation of IStateBitSet. This may help you, depending on your density of 1s; you can verify that using a profiler/debugger.
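For illustration only, here is one plausible shape of such a sparse bitset, allocating words lazily in a map keyed by word index. This is a guess at the general idea, not the implementation described above, and a real backtrackable version would additionally trail each word:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a sparse bitset: only words containing at least one set bit
// are allocated, so long runs of zeros cost no memory. Hypothetical and
// non-backtrackable; shown only to illustrate the sparsity idea.
final class SparseBitSet {
    private final Map<Integer, Long> words = new HashMap<>();

    void set(int bit) {
        // Java masks the shift amount, so 1L << bit is the in-word position.
        words.merge(bit >>> 6, 1L << bit, (a, b) -> a | b);
    }

    void clear(int bit) {
        int idx = bit >>> 6;
        Long w = words.get(idx);
        if (w == null) return;
        long nw = w & ~(1L << bit);
        if (nw == 0L) words.remove(idx); // free fully-zeroed words
        else words.put(idx, nw);
    }

    boolean get(int bit) {
        Long w = words.get(bit >>> 6);
        return w != null && (w & (1L << bit)) != 0L;
    }
}
```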
Hello Fabien, that sounds great! This memory consumption by IStateBitSet for arrays full of zeros was also my nightmare. Maybe it will allow us to use the original PropGraphCumulative propagator in Choco 4.
PropGraphCumulative is a propagator used by the cumulative constraint. This propagator uses a backtrackable graph: the vertices of the graph are tasks, and there is an edge between two vertices if their tasks may overlap. When the search goes forward, the graph gets smaller; when it backtracks, some edges have to be returned to the graph. The method PropGraphCumulative.propagate(int evtmask) removes the edge between two vertices when their tasks stop overlapping.
In general, the maximum number of edges is O(N^2), where N is the number of vertices (tasks). The while loop breaks after 2*N vertices have been searched, which means that some extra edges stay in the graph although they could be removed. This does not influence the result, but it consumes memory; when we need to schedule a lot of tasks, or when there are several cumulative constraints, it can cause an OutOfMemory error.
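Here is a hedged sketch of the loop shape described above, reconstructed from this description rather than copied from the choco source; all names and the overlap test are illustrative:

```java
import java.util.ArrayDeque;
import java.util.BitSet;
import java.util.Deque;

// Reconstructed shape of the filtering described above (illustrative, not
// the actual PropGraphCumulative source): process a worklist of vertices,
// dropping edges whose tasks can no longer overlap, but stop after 2*N
// vertices have been searched. Remaining removable edges are harmless for
// correctness but keep consuming memory in the backtrackable graph.
public class GraphFilterSketch {
    int n;                 // number of tasks/vertices
    BitSet[] neighbors;    // adjacency: neighbors[i].get(j) <=> edge (i, j)
    int[] est, lct;        // earliest start / latest completion per task

    boolean mayOverlap(int i, int j) {
        // Two tasks can still overlap iff their time windows intersect.
        return est[i] < lct[j] && est[j] < lct[i];
    }

    void filterEdges(Deque<Integer> worklist) {
        int searched = 0;
        while (!worklist.isEmpty()) {
            int i = worklist.poll();
            for (int j = neighbors[i].nextSetBit(0); j >= 0;
                     j = neighbors[i].nextSetBit(j + 1)) {
                if (!mayOverlap(i, j)) {
                    neighbors[i].clear(j); // in choco this removal is trailed
                    neighbors[j].clear(i);
                }
            }
            if (++searched >= 2 * n) {
                break; // the break line discussed in this issue
            }
        }
    }

    public static void main(String[] args) {
        GraphFilterSketch g = new GraphFilterSketch();
        g.n = 2;
        g.est = new int[]{0, 10};
        g.lct = new int[]{5, 15}; // windows [0,5) and [10,15) are disjoint
        g.neighbors = new BitSet[]{new BitSet(), new BitSet()};
        g.neighbors[0].set(1);
        g.neighbors[1].set(0);
        g.filterEdges(new ArrayDeque<>(java.util.List.of(0, 1)));
        System.out.println(g.neighbors[0]); // {} -- the edge was removed
    }
}
```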
The suggested solution is to remove line number …, but maybe it needs deeper research.