Wouter Knoben edited this page May 31, 2021 · 9 revisions

Frequently asked questions

Welcome to the summaWorkflow_public wiki! Here we outline some common questions and how to deal with them. For questions specific to the computational infrastructure available at the University of Saskatchewan, you can also look here: FAQ specific to University of Saskatchewan.

Re-use already downloaded forcing or parameter data for a new domain

You can specify the path to an existing data folder in the control file for your new domain and selectively run scripts from the workflow. That is, do not run the download script for the existing data but start at the relevant pre-processing step.
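As a sketch, the relevant control file entries might look as follows. The setting names below are illustrative only; use the names that actually appear in your control file:

```
# Point the new domain's control file at the existing data folder
forcing_raw_path    | /project/shared/era5_raw/     # already downloaded, skip the download script
parameter_dem_path  | /project/shared/merit_dem/    # already downloaded, skip the download script
```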


SUMMA output control

Determine which simulation variables SUMMA outputs by changing the outputControl.txt file in the SUMMA settings folder. A full overview of SUMMA variables can be found in the SUMMA source code, in file SUMMA/build/source/dshare/var_lookup.f90. See here for more details: https://summa.readthedocs.io/en/latest/input_output/SUMMA_input/#output-control-file
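For illustration, a minimal outputControl.txt could look like the sketch below. Variable names must match entries in var_lookup.f90, and the exact column layout (output frequency and optional statistics flags) is described in the documentation linked above:

```
! varName | outputFrequency
scalarSWE | 24
scalarRainfall | 24
```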


Make a SUMMA initial conditions file

Run SUMMA with the -r e argument for as long a period as you want to use to determine initial states. This produces a (r)estart file at the (e)nd of your simulation. Other restart options are year, month, day or never, giving you restart files at the start of each new year, month or day. Optionally use the script to be added in summaWorkflow_public/0_tools to:

  • Ensure canopy is empty
  • Remove any existing snow layers
  • to be added
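For example, in the same plain command style as the -g examples below (the file manager path is a placeholder):

summa.exe -r e -m path/to/filemanager.txt

The restart file written at the end of this run can then serve as the initial conditions file for subsequent simulations.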


Connect SUMMA to a calibration algorithm

SUMMA uses a hierarchical approach to reading parameter files (https://summa.readthedocs.io/en/latest/input_output/SUMMA_input/#attribute-and-parameter-files). In essence this means it reads the parameter files in sequence. First, it reads the GRU and HRU parameters from basinParamInfo.txt and localParamInfo.txt respectively. Next, certain HRU parameter values are overwritten with values read from the lookup tables for soil and vegetation parameters. Finally, SUMMA checks the trial parameters .nc file and overwrites any existing GRU- or HRU-level parameters with the values found in this file. If the trial parameter file is empty, no existing values are changed.

Connect SUMMA to a calibration algorithm by (1) determining which parameters you want to calibrate and (2) letting the calibration algorithm write the values it wants to test to the trialParams.nc file. A full overview of SUMMA parameters can be found in the SUMMA source code, in file SUMMA/build/source/dshare/var_lookup.f90. The existing code to create a trial parameter file in summaWorkflow_public/4_model_input/SUMMA/3e_trial_parameters can form the basis for step 2.


Parallelize SUMMA model runs

Run the SUMMA executable with the optional argument -g. -g takes two inputs, start_gru and num_gru: respectively the index of the GRU at which to start this specific run (counted as indices in the attributes .nc file, so this value has a minimum of 1 and a maximum of the number of GRUs in your domain), and the number of GRUs to run from that starting index. Some examples:

summa.exe -g 1 1 -m path/to/filemanager.txt

Starts the SUMMA run at GRU 1 and runs only that GRU.

summa.exe -g 1 50 -m path/to/filemanager.txt

Starts the SUMMA run at GRU 1 and runs 50 GRUs (including the first), ending with GRU 50.

summa.exe -g 50 1 -m path/to/filemanager.txt

Starts the SUMMA run at GRU 50 and runs only that GRU.

summa.exe -g 50 50 -m path/to/filemanager.txt

Starts the SUMMA run at GRU 50 and runs 50 GRUs (including the first), ending with GRU 99.

Parallelizing such runs on a High Performance Computing cluster can look as follows (SLURM example). Submit a job with the --array option (SLURM array indices may technically start at 0, but starting them from 1 may be easier to conceptualize):

sbatch --array=1-10 run_summa.sh

Inside the run_summa.sh script, use the $SLURM_ARRAY_TASK_ID variable to set which GRUs to run:

#!/bin/bash
#SBATCH --ntasks=1
# ... more SBATCH stuff if needed

# Define the GRU settings
gruMax=198 # Total number of GRUs in the domain
gruCount=20 # Number of GRUs per chunk

# Get the array ID for further use
offset=$SLURM_ARRAY_TASK_ID 

# Start at 1 for array ID 1, 21 for array ID 2, etc
gruStart=$(( 1 + gruCount*(offset-1) ))

# Check that we don't specify too many basins
if [ $(( gruStart+gruCount )) -gt $gruMax ]; then
 gruCount=$(( gruMax-gruStart+1 ))
fi

# Run SUMMA
summa.exe -g $gruStart $gruCount -m /path/to/filemanager.txt 
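To sanity-check the chunking arithmetic without submitting anything, the same calculation can be run locally. This sketch prints the -g arguments each array task would receive for the example settings above (gruMax=198, gruCount=20, array IDs 1 through 10):

```shell
#!/bin/bash
# Reproduce the chunking arithmetic from the SLURM script above for each array ID
gruMax=198   # total number of GRUs in the domain
gruCount=20  # number of GRUs per chunk

for offset in $(seq 1 10); do
  count=$gruCount
  gruStart=$(( 1 + count*(offset-1) ))
  # Shrink the final chunk so it does not run past the last GRU
  if [ $(( gruStart+count )) -gt $gruMax ]; then
    count=$(( gruMax-gruStart+1 ))
  fi
  echo "array task $offset: summa.exe -g $gruStart $count"
done
```

The last task ends up with -g 181 18, covering GRUs 181 through 198.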


Summarize parallel SUMMA runs log files

SUMMA produces log files for each set of GRUs when the GRUs are run in parallel with option -g. These files can be summarized into an easy-to-read overview by using the script to be added in the summaWorkflow_public/0_tools folder.
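Until that script is available, a rough shell sketch of the idea is shown below. It builds two fake log files and counts how many report success; the success phrase used here is an assumption, so check the actual wording at the end of your own SUMMA log files:

```shell
#!/bin/bash
# Sketch (not the workflow's own script): count how many parallel SUMMA runs
# finished cleanly by scanning their log files for a success phrase.
logDir=$(mktemp -d)   # stand-in for your real log folder
echo "finished simulation successfully" > "$logDir/summa_G1-50.log"
echo "segmentation fault"               > "$logDir/summa_G51-100.log"

# Count log files containing the success phrase, and the total number of logs
success=$(grep -l "finished simulation successfully" "$logDir"/*.log | wc -l)
total=$(ls "$logDir"/*.log | wc -l)
echo "$success of $total runs finished successfully"
```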


Merge multiple SUMMA outputs into a single file

SUMMA runs that are split into multiple chunks with the -g argument result in multiple output files, each containing the simulations for a single chunk of GRUs. These output files can be merged into a single file by using the script to be added in the summaWorkflow_public/0_tools folder.


Parallelize mizuRoute model runs

to be added
