Test OpenVisus performance

Set the visus (convert) command:

VISUS=build/RelWithDebInfo/visus.exe
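
On Linux or macOS the binary ends up in a different place; a hedged sketch (the exact path depends on your CMake generator and build type, so adjust as needed):

VISUS=build/visus                    # single-config generators (Makefiles, Ninja); path is an assumption
# VISUS=build/RelWithDebInfo/visus   # multi-config generators, mirroring the Windows layout above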

Test encoding/decoding speed

Encode/decode single blocks (coming from the 2kbit1 dataset) and report throughput in MByte/sec:

IDX_DATASET="/visus_dataset/visus.idx"
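
Before benchmarking, a quick plain-shell sanity check that the dataset path actually resolves (nothing OpenVisus-specific here):

[ -f "$IDX_DATASET" ] || echo "dataset not found: $IDX_DATASET"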

ZIP

$VISUS test-encoder-speed "$IDX_DATASET" zip
Ratio 6.96%
Encoding MByte(245) sec(48.006) MByte/sec(5.10353)
Decoding MByte(3520) sec(8.826) MByte/sec(398.822)

LZ4

$VISUS test-encoder-speed "$IDX_DATASET" lz4
Ratio 8.76%
Encoding MByte(1430)  sec(16.362) MByte/sec(87.3976) 
Decoding MByte(16322) sec(6.704)  MByte/sec(2434.67)
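
As a cross-check, the reported throughput is simply total MByte divided by elapsed seconds; for example, recomputing the LZ4 numbers above with bc:

echo "scale=4; 1430/16.362" | bc    # 87.3976 MByte/sec (encoding)
echo "scale=4; 16322/6.704" | bc    # ~2434.67 MByte/sec (decoding)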

Test IDX read speed with zip/lz4 encoding and hzorder/rowmajor layout

Execute several 'read' queries of dimension 512^3:

ZIP-HZORDER

$VISUS test-query-speed "$IDX_DATASET" --query-dim 512
box(572 1084 1210 1722 1246 1758)  access.rok(78647/102272)  access.rfail(0/0)  io.nopen(52/67.6203)  io.rbytes(4.96722/6.45932)  io.wbytes(0/0)
all done in 20.744 nqueries/sec(1.25337)

ZIP-ROWMAJOR

$VISUS test-query-speed "$IDX_DATASET" --query-dim 512
box(815 1327 1391 1903 128 640)  access.rok(96465/177325)  access.rfail(0/0)  io.nopen(52/95.5882)  io.rbytes(1.008/1.85294)  io.wbytes(0/0)
all done in 20.317 nqueries/sec(1.57504)

LZ4-HZORDER

$VISUS test-query-speed "$IDX_DATASET" --query-dim 512
box(572 1084 1210 1722 1246 1758)  access.rok(78647/109536)  access.rfail(0/0)  io.nopen(52/72.4234)  io.rbytes(11.3908/15.8646)  io.wbytes(0/0)
all done in 20.561 nqueries/sec(1.26453)

LZ4-ROWMAJOR

$VISUS test-query-speed "$IDX_DATASET" --query-dim 512
box(435 947 908 1420 1494 2006)  access.rok(133110/403364)  access.rfail(0/0)  io.nopen(53/160.606)  io.rbytes(8.97475/27.1962)  io.wbytes(0/0)
all done in 20.184 nqueries/sec(2.17994)
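
To sweep several query sizes in one go, a minimal loop reusing only the flags shown above (the list of sizes is an arbitrary choice):

for dim in 128 256 512
do
  $VISUS test-query-speed "$IDX_DATASET" --query-dim $dim
done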

Test IDX writing

Test 'write' speed of a query of dimension 512^3, disabling write locks (--disable-write-locks) since a single process with a single thread is used (i.e. no write collisions).

hzorder

$VISUS --disable-write-locks test-idx-slab-speed --dims "512 512 512" --num-slabs 128 --dtype int32 --hzorder
Wrote all slabs in 8.894sec

rowmajor

$VISUS --disable-write-locks test-idx-slab-speed --dims "512 512 512" --num-slabs 128 --dtype int32 --rowmajor
Wrote all slabs in 6.648sec 
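
To run both layouts back to back, a small wrapper around the same command (flags exactly as above):

for layout in --hzorder --rowmajor
do
  $VISUS --disable-write-locks test-idx-slab-speed --dims "512 512 512" --num-slabs 128 --dtype int32 $layout
done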

Testing mod_visus access

Start the visus server on atlantis using Docker:

ssh [email protected]

cat <<EOF > ~/visus_datasets/visus.config
<visus><dataset name='2kbit1' url='file:///visus_datasets/2kbit1/visus.idx' permissions='public'/></visus>
EOF

docker run -it  --rm --env VISUS_DATASETS=/visus_datasets -v ~/visus_datasets:/visus_datasets -p 8080:80 visus/mod_visus-alpine 
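
Before launching the client tests, it can help to verify the server is reachable from the client machine; a quick curl check reusing the dataset URL from the config below (only the HTTP status is checked, since the response body format is server-dependent):

curl -s -o /dev/null -w "%{http_code}\n" "http://atlantis.sci.utah.edu:8080/mod_visus?dataset=2kbit1"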

On the client, try several combinations of nconnections / num_queries_per_request:

cat <<EOF > run_test.sh
#!/bin/bash
for nconnections in 1 2 3 4
do
  for num_queries_per_request in 1 2 4 8 16 32 
  do 
    cat <<END > tmp.visus.config
<?xml version='1.0' ?>
<visus>
  <dataset name='TestModVisusAccess' url='http://atlantis.sci.utah.edu:8080/mod_visus?dataset=2kbit1' >
    <access type='ModVisusAccess' chmod='r' compression='zip'  nconnections="\$nconnections" num_queries_per_request="\$num_queries_per_request" />
  </dataset> 
</visus>
END
    echo "*** Testing nconnections=\$nconnections num_queries_per_request=\$num_queries_per_request"
    $VISUS --visus-config tmp.visus.config import TestModVisusAccess --box "0 256 0 256 0 256" 
  done
done
EOF

chmod a+rx run_test.sh
./run_test.sh > log.txt

Inspect the logs:

grep -E "Testing nconnections|All done" log.txt

nconnections=1 num_queries_per_request=1  seconds=132.141 
nconnections=1 num_queries_per_request=2  seconds= 67.935 
nconnections=1 num_queries_per_request=4  seconds= 35.009 
nconnections=1 num_queries_per_request=8  seconds= 18.513 
nconnections=1 num_queries_per_request=16 seconds= 10.765
nconnections=1 num_queries_per_request=32 seconds=  8.207 

nconnections=2 num_queries_per_request=1  seconds= 66.4 
nconnections=2 num_queries_per_request=2  seconds= 34.214 
nconnections=2 num_queries_per_request=4  seconds= 18.074 
nconnections=2 num_queries_per_request=8  seconds= 11.436 
nconnections=2 num_queries_per_request=16 seconds=  6.093 
nconnections=2 num_queries_per_request=32 seconds=  4.67 

nconnections=3 num_queries_per_request=1  seconds= 52.404 
nconnections=3 num_queries_per_request=2  seconds= 33.526 
nconnections=3 num_queries_per_request=4  seconds= 18.342 
nconnections=3 num_queries_per_request=8  seconds=  9.62 
nconnections=3 num_queries_per_request=16 seconds=  4.738 
nconnections=3 num_queries_per_request=32 seconds=  3.68 

nconnections=4 num_queries_per_request=1  seconds= 45.535 
nconnections=4 num_queries_per_request=2  seconds= 31.228 
nconnections=4 num_queries_per_request=4  seconds= 17.747 
nconnections=4 num_queries_per_request=8  seconds= 11.514 
nconnections=4 num_queries_per_request=16 seconds=  3.453 
nconnections=4 num_queries_per_request=32 seconds=  3.16
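
Back-of-the-envelope: the best configuration (nconnections=4, num_queries_per_request=32) is roughly 40x faster than the single-connection, single-query baseline:

echo "scale=1; 132.141/3.16" | bc    # ~41.8x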