This tutorial demonstrates how to perform real-time network throughput measurements on a Virtual Application Network using the iperf3 tool.
In this tutorial you will:
- deploy iperf3 servers in three separate clusters
- use the iperf3 server pods to run iperf3 client test instances
- create a Virtual Application Network which will enable the iperf3 client test instances to access iperf3 servers in any cluster
To complete this tutorial, do the following:
- Prerequisites
- Step 1: Set up the demo
- Step 2: Deploy the Virtual Application Network
- Step 3: Deploy the iperf3 servers
- Step 4: Create Skupper services for the Virtual Application Network
- Step 5: Bind the Skupper services to the deployment targets on the Virtual Application Network
- Step 6: Run benchmark tests across the clusters
- Cleaning up
- Next steps
- The kubectl command-line tool, version 1.15 or later (installation guide)
- The skupper command-line tool, the latest version (installation guide)
The basis for this demonstration is to test communication performance across distributed clusters. You should have access to three independent clusters to observe performance over a Skupper Network. As an example, the three clusters might be composed of:
- A private cloud cluster running on your local machine (private1)
- Two public cloud clusters running in public cloud providers (public1 and public2)
- On your local machine, make a directory for this tutorial and clone the example repo into it:

   mkdir ~/iperf-demo
   cd ~/iperf-demo
   git clone https://github.com/skupperproject/skupper-example-iperf.git
- Prepare the target clusters.

  - On your local machine, log in to each cluster in a separate terminal session.
  - In each cluster, create a namespace to use for the demo.
  - In each cluster, set the kubectl config context to use the demo namespace (see the kubectl cheat sheet); example commands follow this list.
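The steps above do not spell out the namespace commands; as a minimal sketch, assuming a namespace named iperf-demo (the name is an assumption for illustration, any name works as long as you use it consistently in each cluster):

   # create the demo namespace and make it the default for subsequent kubectl commands
   kubectl create namespace iperf-demo
   kubectl config set-context --current --namespace iperf-demo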
On each cluster, using the skupper tool, define the Virtual Application Network and the connectivity for the peer clusters.
- In the terminal for the first public cluster, deploy the public1 application router. Create connection tokens for connections from the public2 cluster and the private1 cluster:

   skupper init --site-name public1
   skupper token create private1-to-public1-token.yaml
   skupper token create public2-to-public1-token.yaml
- In the terminal for the second public cluster, deploy the public2 application router. Create a connection token for connections from the private1 cluster and connect to the public1 cluster:

   skupper init --site-name public2
   skupper token create private1-to-public2-token.yaml
   skupper link create public2-to-public1-token.yaml
- In the terminal for the private cluster, deploy the private1 application router. Connect to the public1 and public2 clusters:

   skupper init --site-name private1
   skupper link create private1-to-public1-token.yaml
   skupper link create private1-to-public2-token.yaml
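Before deploying workloads, you can optionally confirm that the links came up. For example, in the private1 terminal both links should report as connected:

   skupper link status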
After creating the application router network, deploy one iperf3 server to each of the clusters.
- In the terminal for the private1 cluster, deploy the first iperf3 server:

   kubectl apply -f ~/iperf-demo/skupper-example-iperf/deployment-iperf3-a.yaml
- In the terminal for the public1 cluster, deploy the second iperf3 server:

   kubectl apply -f ~/iperf-demo/skupper-example-iperf/deployment-iperf3-b.yaml
- In the terminal for the public2 cluster, deploy the third iperf3 server:

   kubectl apply -f ~/iperf-demo/skupper-example-iperf/deployment-iperf3-c.yaml
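Optionally, confirm that each server pod is running before creating the services. For example, in the private1 terminal (use the iperf3-server-b and iperf3-server-c labels in the other clusters):

   # the iperf3 server pod should reach the Running state
   kubectl get pods -l application=iperf3-server-a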
- In the terminal for the private1 cluster, create the iperf3-server-a service:

   skupper service create iperf3-server-a 5201
- In the terminal for the public1 cluster, create the iperf3-server-b service:

   skupper service create iperf3-server-b 5201
- In the terminal for the public2 cluster, create the iperf3-server-c service:

   skupper service create iperf3-server-c 5201
- In each of the cluster terminals, verify that the services are present:

   skupper service status

Note that all three services are listed in every cluster, even though each service was created in only one of them.
- In the terminal for the private1 cluster, bind the iperf3-server-a deployment to the iperf3-server-a service:

   skupper service bind iperf3-server-a deployment iperf3-server-a
- In the terminal for the public1 cluster, bind the iperf3-server-b deployment to the iperf3-server-b service:

   skupper service bind iperf3-server-b deployment iperf3-server-b
- In the terminal for the public2 cluster, bind the iperf3-server-c deployment to the iperf3-server-c service:

   skupper service bind iperf3-server-c deployment iperf3-server-c
- In each of the cluster terminals, verify that the services are bound to their targets:

   skupper service status

Note that each cluster now shows the target it provides.
After deploying the iperf3 servers into the private and public cloud clusters, the application router network connects the servers and enables communications even though they are running in separate clusters.
- In the terminal for the private1 cluster, attach to the iperf3-server-a container running in the private1 cluster and run the iperf3 client benchmark against each server:

   kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
   kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
   kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c
- In the terminal for the public1 cluster, attach to the iperf3-server-b container running in the public1 cluster and run the iperf3 client benchmark against each server:

   kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
   kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
   kubectl exec $(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c
- In the terminal for the public2 cluster, attach to the iperf3-server-c container running in the public2 cluster and run the iperf3 client benchmark against each server:

   kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a
   kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b
   kubectl exec $(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c
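By default, iperf3 runs a 10-second TCP test. The same kubectl exec pattern accepts additional iperf3 client flags if you want longer or heavier runs; for example, from the private1 terminal (the -t and -P flags are standard iperf3 options, not specific to this demo):

   # 30-second test with 4 parallel streams against the server in the public2 cluster
   kubectl exec $(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c -t 30 -P 4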
Restore your cluster environment by removing the resources created in the demonstration and deleting the Skupper network:
- In the terminal for the private1 cluster, delete the resources:

   kubectl delete deployment iperf3-server-a
   skupper delete
- In the terminal for the public1 cluster, delete the resources:

   kubectl delete deployment iperf3-server-b
   skupper delete
- In the terminal for the public2 cluster, delete the resources:

   kubectl delete deployment iperf3-server-c
   skupper delete
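If you created dedicated namespaces for the demo in Step 1, you can also delete them. A sketch, assuming the namespace was named iperf-demo in each cluster:

   # run in each cluster terminal
   kubectl delete namespace iperf-demo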