Calculating lots of isochrones can lead to "infinite loops"? #1897

Open
1 task done

lenalebt opened this issue Nov 12, 2024 · 1 comment

Comments

@lenalebt

Is there an existing issue for this?

  • I have searched the existing issues

Problem description

I am calculating lots of isochrones for different purposes, basically hammering ORS with isochrone requests for days in a row. In most cases, everything works just fine. Sometimes, however, ORS does not stop calculating an isochrone, and the whole server ends up burning CPU cycles on all cores without producing any output. The runaway threads live for hours (basically until I manually stop ORS), although the isochrones are "only" calculated for ranges up to 60 minutes (car profile). I would expect the calculation to stop once a certain size has been reached.

I do not yet know what is actually happening. It does not happen immediately, but it is practically guaranteed to happen after a few hours. It is hard to debug because I have not yet been able to pin down which request triggers it. I am willing to do some debugging, but before I dig deeper:

  • Is this behaviour known in any way? I could not find anything like it in the GitHub issues, but maybe I just missed it.
  • Do you have any specific ideas for debugging it? I could try to get a thread dump or something similar; I just have not done so yet because ORS runs inside a container and I would need to fiddle a bit to capture it properly. But maybe you already have better ideas.

Proposed solution

More debugging is needed; I wanted to ask here first before investing more time.

Additional context

ORS 8.2.0 from Docker

ORS Settings:

ors:
  #  cors:
  #    allowed_origins: "*"
  #    allowed_headers: Content-Type, X-Requested-With, accept, Origin, Access-Control-Request-Method, Access-Control-Request-Headers, Authorization
  #    preflight_max_age: 600
  #  messages:
  #  ##### ORS endpoints settings #####
  endpoints:
    routing:
      enabled: true
    matrix:
      enabled: true
      maximum_visited_nodes: 10000000
      maximum_search_radius: 70
    isochrones:
      enabled: true
      allow_compute_area: false
      maximum_intervals: 180
      maximum_range_distance_default: 70000
      maximum_range_time_default: 7200
    fastisochrones:
      enabled: true

  #  ##### ORS engine settings #####
  engine:
    source_file: /home/ors/files/area.osm.pbf
    init_threads: 1
    preparation_mode: false
    graphs_root_path: ./graphs
    graphs_data_access: RAM_STORE
    elevation:
      preprocessed: false
      data_access: MMAP
      cache_clear: false
      provider: srtm
      cache_path: ./elevation_cache
    profile_default:
      maximum_snapping_radius: 70
    profiles:
      car:
        enabled: true
        profile: driving-car
        encoder_options:
          turn_costs: true
          block_fords: false
          use_acceleration: true
        preparation:
          min_network_size: 200
          methods:
            ch:
              enabled: true
              threads: 1
              weightings: fastest
            lm:
              enabled: false
              threads: 1
              weightings: fastest,shortest
              landmarks: 16
            core:
              enabled: true
              threads: 1
              weightings: fastest,shortest
              landmarks: 64
              lmsets: highways;allow_all
        execution:
          methods:
            lm:
              active_landmarks: 6
            core:
              active_landmarks: 6
        ext_storages:
          WayCategory:
          HeavyVehicle:
          WaySurfaceType:
          RoadAccessRestrictions:
            use_for_warnings: true
      bike-regular:
        enabled: true
        profile: cycling-regular
        encoder_options:
          consider_elevation: true
          turn_costs: true
        ext_storages:
          WayCategory:
          WaySurfaceType:
          HillIndex:
          TrailDifficulty:
      walking:
        enabled: true
        profile: foot-walking
        encoder_options:
          block_fords: false
        ext_storages:
          WayCategory:
          WaySurfaceType:
          HillIndex:
          TrailDifficulty:
      wheelchair:
        enabled: true
        profile: wheelchair
        encoder_options:
          block_fords: true
        maximum_snapping_radius: 50
        ext_storages:
          WayCategory:
          WaySurfaceType:
          Wheelchair:
            KerbsOnCrossings: true
          OsmId:

Forum Topic Link

No response

@sfendrich
Contributor

Thanks for reporting. We haven't encountered this behavior so far. Normally, ORS should stop calculations once maximum_visited_nodes is exceeded.

Things you could try to narrow down the issue:

  • Increase the log level of ORS to get more information in the log file.
  • Check whether the OS is running out of memory and swapping, which would drastically slow down ORS.
  • Attach a profiler such as VisualVM to ORS to find out where it gets stuck.
  • Check whether the JVM is running out of memory and spending a lot of time in garbage collection; this can also be done with VisualVM.
  • Maybe specify a smaller value of maximum_visited_nodes for isochrones in your config file (see the sketch after this list).
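
For illustration, here is a minimal sketch of what the first and last suggestions could look like in ors-config.yml. The exact keys are assumptions, not verified against the 8.2.0 config reference: the logging section relies on the standard Spring Boot logging mechanism, and maximum_visited_nodes under endpoints.isochrones is assumed to behave analogously to the matrix endpoint setting already present in your config.

# Minimal sketch, keys assumed rather than verified against the 8.2.0 config reference
logging:
  level:
    org.heigit: DEBUG   # raise ORS verbosity via Spring Boot's standard logging keys (assumed to apply here)

ors:
  endpoints:
    isochrones:
      enabled: true
      maximum_visited_nodes: 1000000   # assumed analogous to the matrix endpoint setting; tune downwards to limit search size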

If you find useful information in your log file, or have a specific request that reliably causes the issue, feel free to post it here as well.
