On this page, you will find all the code and documentation needed to run
FANcY's hardware implementation in P4_14. That includes the P4 code itself,
the controllers, utility scripts, and an orchestrator to automate the
different runs.

**Note:** We have added a P4_16 implementation! You can find it here.
- The `eval` folder contains `tofino-test.py`, a script that can run all the
  different evaluation experiments. The script uses TCP sockets to send
  commands to the different components involved in the evaluation, such as
  the receiver server and the Tofino switches.
- The `scripts` folder contains a set of utilities used by all the other
  scripts.
- The `p4src` folder contains the code of `fancy`, and also some special
  switch code called `middle_switch.p4`, which we use as a middle switch
  between fancy's upstream and downstream state machines. This switch is in
  charge of adding some packet drops when instructed. For `fancy`, you will
  find two main programs: `fancy.p4` and `fancy_zooming.p4`. At the time of
  writing, and due to a bug in the SDE we used to develop the code, we had to
  split the two main components into two programs.
- The `control_plane` folder contains the control plane for our three
  programs. For the control plane we use `run_pd_rpc.py` and a custom Python
  server that listens to commands sent by the orchestrator.
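The command channel between the orchestrator and the other components is a plain TCP connection. The exact wire format is defined by `tofino-test.py` and `command_server.py`; the sketch below only illustrates the general pattern. It is written in Python 3 for readability (the repo itself targets `python2`), the one-shot server is a stand-in for `command_server.py`, and the `"start_receiver"` command string is made up for the example:

```python
import socket
import threading

def send_command(host, port, command):
    """Send one command string over a fresh TCP connection (hypothetical format)."""
    with socket.create_connection((host, port)) as s:
        s.sendall(command.encode())

def serve_once(srv, received):
    """Stand-in for a command server: accept one connection, record the payload."""
    conn, _ = srv.accept()
    with conn:
        received.append(conn.recv(1024).decode())

# Bind on an ephemeral localhost port so the sketch is self-contained.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

received = []
t = threading.Thread(target=serve_once, args=(srv, received))
t.start()
send_command("127.0.0.1", port, "start_receiver")  # made-up command name
t.join()
srv.close()
print(received[0])  # -> start_receiver
```

In the real setup the server side runs on the receiver and the switches, listening on the ports configured in `eval/server_mappings.py`.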
To run our case study, you will need to have a setup with the following:
- Two Intel Tofino switches. We used the Wedge 100BF-32X.
  - The switches must be running SDE version 9.2.0.
  - The system-wide `python` needs to be set to `python2`. I know this is not
    ideal, but the old SDE uses `python2`; you can revert the change
    afterwards. Set the symbolic link:
    `sudo ln -sf $(which python2) /usr/bin/python`
  - You need to make sure your `run_pd_rpc.py` runs with `python2`, and thus
    that its code starts with `#!/usr/bin/python2`.
  - You need to install scapy for `python2`: `pip2 install scapy==2.4.3`.
    Usually it gets installed with the SDE, so you probably won't have to do
    anything.
- You need two servers: one sender and one receiver.
  - Each server should have at least one 100GbE NIC. We used Mellanox
    ConnectX-5 100Gbps NICs.
  - You need to install `iperf-2.1.0`. We use a flag (`--sum-only`) that is
    not available in the `iperf` version you get from `apt-get`. To install
    it you can simply do the following:

    ```
    # install iperf version 2.1.0
    wget https://sourceforge.net/projects/iperf2/files/iperf-2.1.0-rc.tar.gz
    tar -xvf iperf-2.1.0-rc.tar.gz
    cd iperf-2.1.0-rc/
    ./configure; make; sudo make install  # make sure it has updated
    ```
- You need internet connectivity from the sender to the receiver and the
  Tofino switches. The orchestrator needs it to send commands. Thus, make
  sure that there is connectivity and that the ports you use are open.
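Before running anything, it can save time to verify that the command ports are actually reachable. The helper below is not part of the repo; it is a small Python sketch (Python 3, for illustration) you can adapt, and the demonstration at the bottom uses a throwaway local socket instead of real testbed addresses:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In the real testbed you would check the IPs/ports you later put in
# eval/server_mappings.py, e.g.:
#   port_open("<tofino1 ip>", 5000)
#   port_open("<receiver server ip>", 31500)
# Local demonstration against a throwaway listening socket:
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
print(port_open("127.0.0.1", srv.getsockname()[1]))  # -> True
srv.close()
```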
In order to successfully run the evaluation, you will need to set up your testbed as depicted in the figure below:
In the figure, you can see that we are using two servers, one sender and one
receiver. Each is connected to the first switch, `tofino1` (the name in our
setup), which is running one of the FANcY programs. Note that we connect them
to port 7 (176) and port 8 (184), respectively. Then, `tofino1` is connected
to `tofino4` using 3x100G cables:
- Main link: the link `fancy` uses to send traffic to `dst` through
  `tofino4`, and the link `tofino4` uses to send traffic to `src` when it
  comes from `dst`.
- Return link: the same, but for traffic from and to `dst`.
- Backup path: the link `fancy` uses to send traffic from `src` to `dst`
  when a failure is detected.
The second switch, `tofino4`, runs the program `middle_switch.p4`. That is a
special program that simply forwards packets as described above and as shown
in the figure. Furthermore, it can be configured to drop some percentage of
packets. This configuration can be done at runtime through the controller.
Apart from the physical and internal port numbers, you can see that each port
also has a port name of the form `PORTX_S`. Those port names are important
and are hardcoded in the `p4src/includes/constants.p4` file. In case you want
to use different Tofino ports, you must also update the mappings there and
recompile the program. The control plane code also depends on those constant
`#define`s.
⏳ Once you have everything installed, running the experiments is relatively easy. The expected time to run the following experiments is 20 minutes. ⏳
- Copy the tofino folder to the servers and the Tofino switches.
  Alternatively, pull the entire repository. In the following examples, we
  assume the content of the tofino folder has been copied to `~/fancy/`.
- Make sure the right `iperf` is installed on both servers. If not, read the
  requirements section and install `iperf`.

  ```
  $ iperf -v
  iperf version 2.1.0-rc (5 Jan 2021) pthreads
  ```
- Make sure `scapy` is installed on the switches for `python2`.

  ```
  $ pip list | grep scapy
  scapy 2.4.3
  ```
- Configure the sender and receiver IPs and ARP tables (just in case ARP
  messages are not being flooded). For that you can use the
  `scripts/server_setup.sh` utility. Do the same on both sender and receiver,
  but swap the IPs and use the MAC address of the other side. For example:

  ```
  cd ~/fancy/scripts/
  ./server_setup.sh <intf> <src ip> <dst ip> <dst mac>
  ```
- Make sure you have the environment variables pointing to SDE 9.2.0 on both
  Tofino switches.

  ```
  $ echo $SDE
  /data/bf-sde-9.2.0
  ```
- Compile `fancy.p4` and `fancy_zooming.p4` on the first switch (`tofino1`).

  ```
  # first program
  ~/p4_build_new.sh -D SDE9 -D HARDWARE -D REROUTE --with-tofino --no-graphs ~/fancy/p4src/fancy.p4
  # second program
  ~/p4_build_new.sh -D SDE9 -D HARDWARE --with-tofino --no-graphs ~/fancy/p4src/fancy_zooming.p4
  ```

  Note that our `p4_build` script is called `p4_build_new`, and that we are
  using several preprocessor (`-D`) parameters. Also, we assume the code is
  placed at `~/fancy/`.
- Compile `middle_switch.p4` on the second switch (`tofino4`).

  ```
  ~/p4_build_new.sh --with-tofino --no-graphs ~/fancy/p4src/middle_switch.p4
  ```
Now, we are almost all set to start the experiments! 🚀
Now we will run the experiments needed to reproduce the case study (Figure 8
from the paper). To keep the experiments simple, we will use
`eval/tofino-test.py`, an orchestrator that will make our life very easy.

In order for the orchestrator to know how to send commands to the other
server and the Tofino switches, you will need to modify the contents of
`eval/server_mappings.py` with the IPs (public or private) of your two
servers and switches. You will find some default ports, but feel free to
change them if needed.
```
remote_mappings = {
    "tofino1": ("<tofino ip>", 5000),
    "tofino4": ("<tofino ip>", 5001),
    "sender": ("<sender server ip>", 31500),
    "receiver": ("<receiver server ip>", 31500)
}
```
Everything is ready to start the experiments.
- Start the command server on the receiver server.

  ```
  cd ~/fancy/eval
  python2 command_server.py --port 31500
  ```
- Start the `middle_switch` on `tofino4`.

  ```
  $SDE/run_switchd.sh -p middle_switch
  ```
- Start the control plane for the `middle_switch` on `tofino4`.

  ```
  cd ~/fancy/control_plane
  ~/old_tools/run_pd_rpc.py -p middle_switch -i controller_middle_switch.py
  ```
- Start `fancy` on `tofino1`.

  ```
  $SDE/run_switchd.sh -p fancy
  ```
- Start the control plane for `fancy` on `tofino1`.

  ```
  cd ~/fancy/control_plane
  ~/old_tools/run_pd_rpc.py -p fancy -i controller_fancy.py
  ```
- Start the orchestrator on the sender server.

  ```
  cd ~/tofino-fancy/eval
  sudo python2 tofino-test.py --test_type dedicated --output_dir ~/dedicated_outputs/ --remote_server receiver
  ```
- Wait 3 minutes; all the results will be stored at `~/dedicated_outputs/`.
- You can keep the receiver server and the `middle_switch` part untouched.
- Start `fancy_zooming` on `tofino1`.

  ```
  $SDE/run_switchd.sh -p fancy_zooming
  ```
- Start the control plane for `fancy_zooming` on `tofino1`.

  ```
  cd ~/fancy/control_plane
  ~/old_tools/run_pd_rpc.py -p fancy_zooming -i controller_fancy_zooming.py
  ```
- Start the orchestrator on the sender server.

  ```
  cd ~/fancy/eval/
  sudo python2 tofino-test.py --test_type zooming --output_dir ~/zooming_outputs/ --remote_server receiver --sender_intf enp129s0f0
  ```
- Wait 3 minutes; all the results will be stored at `~/zooming_outputs/`.
In order to get the plots out, you can move on to the sigcomm evaluation page.