Data Schema
This page describes the data schema used to structure the IoTBench repository.
IoTBench is a benchmarking framework based on five top-level components: an application profile, a protocol under test, the test environment, the global experimental setup, and the data that is collected across multiple runs for the same setup.
The profile and environment components are themselves built atop base components (input parameters, platform used, etc.).
All components are made of a minimal set of compulsory fields, plus an open list of optional ones. Currently, all components are described using commented YAML.
The figure below provides an overview of IoTBench.
- On the left side: the generic components, which are used across multiple experiments. They can be seen, loosely, as the 'inputs' of IoTBench.
- On the right side: the setup component, which describes a specific experiment instance (which profile, run by which protocol, on which environment). The data collected for a given setup is stored in a run component. These can be seen as the 'outputs' of IoTBench.
The IoTBench repository is a collection of these component descriptions. The five top-level components are described in more detail below.
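As an illustration, such a repository could be organized with one directory per component type, holding one commented YAML file per component instance. The layout below is only a sketch; the directory and file names are assumptions, not part of the schema.

profiles/somepattern-4s.yml # Profile components
protocols/protocol1.yml # Protocol components
environments/xpsetup1.yml # Environment components
setups/data-col1_lwb_telosB_flocklab.yml # Setup components
runs/data-col1_lwb_telosB_flocklab_01.yml # Run components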
An IoTBench profile is composed of
- A profile ID
- A list of input parameters and their values
- A list of output metrics to be reported
- A list of observed metrics to be reported
Plus optionally
- A name
- A textual description
Commented example
profile_id: somepattern-4s # Unique ID for this profile
name: A given pattern, 4s # Brief textual description
description: > # Full textual description (markdown)
  This is a *markdown* description of the profile.
  Multiple lines are allowed, and so are:
  * Item lists
  * And basically any other markdown stuff
input-parameters: # Input parameter values. For now, textual
  traffic-pattern: Some pattern # Pattern, in the space domain
  traffic-period: 4s # Traffic period. Single value or range (X--Y)
  traffic-payload: 8B # Traffic payload. Single value or range (X--Y)
observed-metrics: [] # Selected observed metrics (must be measured)
output-metrics: [ e2e-pdr, # Selected output metrics (must be measured)
                  e2e-latency,
                  power,
                  link-prr ]
An IoTBench protocol description contains
- A protocol ID
Plus optionally
- A name
- A textual description
- Link to source code
- Link to articles and/or other resources
Commented example
protocol_id: protocol1 # Unique ID
name: Protocol 1 # Brief textual description
description: > # Full markdown description
This is a *markdown* description of the protocol.
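The optional link fields could be captured with additional keys, for instance as sketched below; the key names source-code and resources are assumptions, as the schema does not fix them, and the URLs are placeholders.

source-code: https://example.org/protocol1 # Link to source code (assumed key name, placeholder URL)
resources: [ https://example.org/protocol1-paper ] # Links to articles and/or other resources (assumed key name, placeholder URL)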
The same protocol running against the same profile will lead to different results on different environments. Thus, fairly comparing IoTBench results requires clearly specifying the environment used for a given experiment. Looking for a balance between usability and accuracy, IoTBench formalizes an environment as a platform and an environment category.
The environment category describes how the experiment is physically conducted. IoTBench formalizes three categories of environment: a testbed, a simulator, and an ad-hoc network.
The testbed and simulator categories are reserved for publicly accessible environments (be it a physical testbed or a piece of software). All other environments belong to the ad-hoc category.
Thus, an IoTBench environment description contains
- An environment ID
- A platform ID
- The environment category (testbed, simulator, or ad-hoc)
- The test environment ID, if defined
If the category is testbed
- A link to the testbed interface
If the category is simulator
- A link to the simulator software
- The version of the software
Plus optionally
- A name
- A textual description
Commented example
env_id: xpsetup1 # Unique ID
name: telosB_flocklab # Brief textual description
platform_id: tmote # Platform name
env_category: testbed # Test environment category
testbed_id: flocklab # Test environment ID, if defined (key name assumed; the original reused env_id, which is invalid YAML)
link: https://flocklab.ethz.ch # Link to the testbed interface (key name assumed)
description: > # Full markdown description
  This is a *markdown* description of the environment.
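For the simulator category, the conditional fields (a link to the simulator software and its version) would appear instead of the testbed link. A minimal sketch, with assumed key names and illustrative values:

env_id: xpsetup2 # Unique ID (illustrative)
name: tmote_cooja # Brief textual description (illustrative)
platform_id: tmote # Platform name
env_category: simulator # Test environment category
simulator_link: https://example.org/cooja # Link to the simulator software (assumed key name, placeholder URL)
simulator_version: "4.8" # Version of the software (assumed key name, illustrative value)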
An IoTBench setup describes (in a formalized manner) a specific experiment instance. The setup component contains
- A unique ID
- A profile ID
- A protocol ID
- An environment ID
Plus optionally
- A name
- A textual description, with a free label (e.g., high bandwidth) or by reporting the values of key parameters (e.g., LWB_ROUND_PERIOD = 1)
- A testbed configuration file (e.g., Flocklab XML test file)
- The complete list of node IDs (only relevant for testbeds)
- A simulation file, if you report results from Cooja or Renode or ...
- A source code repo URL with the commit ID from your test
- ... anything else relevant
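The source gives no commented example for the setup component; the minimal sketch below follows the pattern of the other components, reusing IDs from the examples on this page (the setup_id value matches the run example further down).

setup_id: data-col1_lwb_telosB_flocklab # Unique ID
profile_id: somepattern-4s # Profile used
protocol_id: protocol1 # Protocol under test
env_id: xpsetup1 # Environment used
name: LWB, TelosB, FlockLab # Brief textual description (illustrative)
description: > # Full markdown description
  This is a *markdown* description of the setup,
  e.g., LWB_ROUND_PERIOD = 1.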
An IoTBench run component contains the actual data that has been collected for one run on a given experiment setup.
It contains
- A run ID
- A timestamp
- A setup ID
- The values of all observed metrics
- The values of all output metrics
Plus optionally
- A textual description
Commented example
run_id: data-col1_lwb_telosB_flocklab_01 # Unique ID
setup_id: data-col1_lwb_telosB_flocklab # Setup ID
timestamp: 2018-05-08 20:17:32 # Date of the run
# For each output/observed metric,
# one entry per measurement. Depending
# on what the metric defines, can be
# e.g. per-node, per-packet, etc.
# Observed metrics
number-nodes: 8 # Number of nodes involved
# Output metrics
e2e-pdr: [ 99.7, 99.2, 98.7, 100, 99.5, 100, 98.2, 99.1 ]
e2e-latency: [ 120, 170, 182, 152, 50, 280, 160, 100 ]
power: [ 1.05, 1.54, 1.20, 0.98, 2.43, 1.89, 1.45, 1.20 ]
link-prr: [ 70, 22, 31, 74, 91, 88, 78, 90 ]
The previous section described the top-level components structuring the IoTBench repository. The profile and environment components are themselves built atop base components (input parameters, platform used, etc.), which are listed and described in this section.
A profile is a possibly partial assignment of concrete values to input parameters, and a precise definition of observed and output metrics to be measured.
To enable accurate comparison of results, IoTBench formalizes input parameters, observed metrics, and output metrics, their mapping to the profiles, and how the metrics should be computed.
An input parameter, for profiles.
[example to update] Commented example
uid: traffic-pattern # Unique ID
name: Traffic Pattern # Brief textual description
description: > # Full markdown description
  The traffic pattern in space, e.g.:
  * "Data Collection"
  * "Many-to-many"
An observed metric, for profiles.
[example to update] Commented example
uid: ambient-interference # Unique ID
name: Ambient Interference # Brief textual description
description: > # Full markdown description
  The measured ambient interference during the run.
An output metric, for profiles.
[example to update] Commented example
uid: e2e-pdr # Unique ID
name: PDR (%) # Brief textual description
description: > # Full markdown description
  The end-to-end Packet Delivery Ratio (PDR),
  at application-layer. In percent.
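To illustrate 'how the metrics should be computed' (see above), the end-to-end PDR would typically be computed per node or per flow as

e2e-pdr = 100 * (packets received at the destination, application layer) / (packets sent)

which matches the per-node value lists in the run example earlier on this page.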
As noted above, the same protocol running against the same profile will lead to different results on different environments; fairly comparing IoTBench results thus requires clearly specifying the environment used for a given experiment. Looking for a balance between usability and accuracy, IoTBench formalizes an environment as a platform and an environment category.
A platform (or mote) is a physical device used for an experiment. The platform component contains
- A unique ID
- A link to the platform description
Plus optionally
- A name
- A textual description
- The embedded radio chip
- A link to the manufacturer, if commercially available
- ... etc.
Commented example
platform_id: tmote # Unique ID
radio: cc2420 # Embedded radio chip ID
name: Tmote Sky # Brief textual description
link: https://example.org/tmote-sky # Link to the platform description (compulsory field; key name assumed, placeholder URL)