slurm-1451021.out
Currently Loaded Modules:
1) singularity/3.6.4
neshcl103
neshcl104
STOP NORMAL END
STOP NORMAL END
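Reading the raw output above: singularity/3.6.4 is the only module loaded, the two hostnames are the allocated nodes, and the two "STOP NORMAL END" lines are MITgcm's normal-termination message, one per MPI rank. The batch script singularity-mpich.sh itself is not included in this log; the lines below are only a minimal sketch of a launch that would produce output of this shape, with the image name (mitgcm-mpich.sif) and executable name (mitgcmuv) assumed rather than taken from the log.

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks=2
    #SBATCH --cpus-per-task=1
    #SBATCH --time=00:15:00
    #SBATCH --partition=cluster

    # Container runtime recorded in the log ("Currently Loaded Modules").
    module load singularity/3.6.4
    module list

    # Print the allocated hosts (corresponds to the neshcl103/neshcl104 lines).
    scontrol show hostnames "$SLURM_JOB_NODELIST"

    # Hybrid MPI launch: the host launcher starts one containerized process per
    # rank; the MPICH inside the container must be ABI-compatible with the host
    # MPI. Image and executable names are placeholders, not from the log.
    srun --mpi=pmi2 singularity exec mitgcm-mpich.sif ./mitgcmuv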
Slurm Job Summary
*****************
- General information:
    date          = Wed Jun 23 14:09:43 CEST 2021
    hostname      = neshcl103
- Job information:
    JobId         = 1451021
    JobName       = singularity-mpich.sh
    UserId        = smomw260(57366)
    Account       = smomw260
    Partition     = cluster
    QOS           = normal
    NodeList      = neshcl[103-104]
    Features      = (null)
    Command       = /gxfs_home/geomar/smomw260/github/MPI-Singularity-PoC/MITgcm_container/singularity-mpich.sh
    WorkDir       = /gxfs_home/geomar/smomw260/github/MPI-Singularity-PoC/MITgcm_container
    StdOut        = /gxfs_home/geomar/smomw260/github/MPI-Singularity-PoC/MITgcm_container/slurm-1451021.out
    StdErr        = /gxfs_home/geomar/smomw260/github/MPI-Singularity-PoC/MITgcm_container/slurm-1451021.out
- Requested resources:
    Timelimit     = 00:15:00 ( 900s )
    MinMemoryNode = ( 39.000M )
    NumNodes      = 2
    NumCPUs       = 2
    NumTasks      = 2
    CPUs/Task     = 1
    TresPerNode   =
- Used resources:
    RunTime       = 00:01:39 ( 99s )
    MaxRSS        = 51940K ( 50.723M )
====================
- Important conclusions and remarks:
    * !!! Please, always check if the number of requested cores and nodes matches the need of your program/code !!!
    * !!! Less than 20% of requested walltime used !!! Please, adapt your batch script.
(null)
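The epilogue's remarks above are actionable: only 00:01:39 of the requested 00:15:00 walltime was used, and MaxRSS peaked at about 51 MB. Below is a hedged sketch of how the #SBATCH requests of a script like singularity-mpich.sh could be tightened for a rerun; the concrete values are illustrations derived from this single run (with some headroom), not settings taken from the original script.

    #SBATCH --nodes=2          # unchanged: 2 MPI tasks on 2 nodes, 1 CPU each
    #SBATCH --ntasks=2
    #SBATCH --cpus-per-task=1
    #SBATCH --time=00:05:00    # was 00:15:00; the run finished in 00:01:39
    #SBATCH --mem=200M         # per-node request with headroom over ~51 MB MaxRSS

A more accurate walltime request also tends to help the scheduler backfill the job sooner.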