README.md (+16 -16)
@@ -2,27 +2,27 @@
## Introduction
- FaaSFlow is a serverless workflow engine that enables efficient workflow execution in 2 ways: a worker-side workflow schedule pattern to reduce scheduling overhead, and a adaptive storage library to use local memory to transfer data between functions on the same node.
+ FaaSFlow is a serverless workflow engine that enables efficient workflow execution in two ways: a worker-side workflow schedule pattern that reduces scheduling overhead, and an adaptive storage library that uses local memory to transfer data between functions on the same node.
## Hardware Dependencies and Private IP Address
1. In our experiment setup, we use an Aliyun ECS instance installed with Ubuntu 18.04 (ecs.g7.2xlarge, cores: 8, DRAM: 32GB) for each worker node, and an ecs.g6e.4xlarge instance (cores: 16, DRAM: 64GB) installed with Ubuntu 18.04 and CouchDB for the database node.
- 2. Please save the private IP address of the storage node as the **<master_ip>**, and save the private IP addredd of other 7 worker nodes as the **<worker_ip>**.
+ 2. Please save the private IP address of the storage node as the **<master_ip>**, and save the private IP addresses of the other 7 worker nodes as the **<worker_ip>**.
- 1. Reset `worker_address` configuration with your <worker_ip>:8000 on `src/grouping/node_info.yaml`. It will specify your worker address. The `scale_limit: 120`respresents the maximum container numbers that can be deployed in each 32GB memory instance, and it do not need any change by default.
+ 1. Reset the `worker_address` configuration with your <worker_ip>:8000 in `src/grouping/node_info.yaml`. It specifies your workers' addresses. The `scale_limit: 120` represents the maximum number of containers that can be deployed on each 32GB-memory instance, and it does not need any change by default.
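A quick way to double-check this edit, assuming the `worker_address` and `scale_limit` keys appear verbatim in the file (the exact YAML layout is not shown in this README):

```
# list the configured worker addresses and the container scale limit
grep -nE 'worker_address|scale_limit' src/grouping/node_info.yaml
# expect one <worker_ip>:8000 entry per worker node and scale_limit: 120
```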
- 2. Reset `COUCHDB_URL` as `http://openwhisk:openwhisk@<master_ip>:5984/` in `src/container/config.py`, `src/workflow_manager/config.py`, `test/asplos/config.py`. It will specify the corresponding database storage you build previously.
+ 2. Reset `COUCHDB_URL` to `http://openwhisk:openwhisk@<master_ip>:5984/` in `src/container/config.py`, `src/workflow_manager/config.py`, and `test/asplos/config.py`. It specifies the database storage you built previously.
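A hedged verification sketch, assuming CouchDB is already running on the storage node with the credentials from the URL above:

```
# confirm all three config files point at the same CouchDB endpoint
grep -n COUCHDB_URL src/container/config.py src/workflow_manager/config.py test/asplos/config.py
# CouchDB answers its root endpoint with a short JSON banner when it is reachable
curl http://openwhisk:openwhisk@<master_ip>:5984/
```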
3. Then, clone the modified code into each node (8 nodes total).
- 4. On the storage node: Run `scripts/db_setup.bash`. This install docker, couchdb, some python packages, and build grouping results from 8 benchmarks.
+ 4. On the storage node: Run `scripts/db_setup.bash`. It installs Docker, CouchDB, and some Python packages, and builds the grouping results for the 8 benchmarks.
- 5. On each worker node: Run `scripts/worker_setup.bash`. This install docker, redis, some python packages, and build docker images from 8 benchmarks.
+ 5. On each worker node: Run `scripts/worker_setup.bash`. It installs Docker, Redis, and some Python packages, and builds the Docker images for the 8 benchmarks.
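A minimal post-setup check on a worker node, assuming the script leaves Docker and Redis running and that `redis-cli` is available (the benchmark image names are not listed in this README, so the last command simply shows whatever was built):

```
# confirm Docker and Redis are up, then list the benchmark images the script built
docker --version
redis-cli ping      # should answer PONG if Redis is reachable
docker images
```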
## WorkerSP Start-up
@@ -34,16 +34,16 @@ Enter `test/asplos/config.py` and define the `GATEWAY_ADDR` as `<master_ip>:7000
- If you would like to run scripts under WorkerSP, you have finished all the operations and allowed to send invocations **by `run.py` scripts for all WorkerSP-based performance test**. Detailed scripts usage is introduced in [Run Experiment](#jumpexper).
+ If you would like to run scripts under WorkerSP, you have finished all the operations and are allowed to send invocations **via the `run.py` scripts for all WorkerSP-based performance tests**. Detailed script usage is introduced in [Run Experiment](#jumpexper).
- **Note:** We recommend to restart the `proxy.py` on each worker node and the `gateway.py` on the master node whenever you start the `run.py` script, to avoid any potential bugs.
+ **Note:** We recommend restarting `proxy.py` on each worker node and `gateway.py` on the master node whenever you start the `run.py` script, to avoid any potential bugs.
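A hedged restart sketch. The proxy arguments mirror the `python3 proxy.py <master_ip> 8000` command shown later in this README; the `gateway.py` arguments are an assumption based on `GATEWAY_ADDR` being `<master_ip>:7000`, so follow the repository's own start-up commands if they differ:

```
# on each worker node: stop and relaunch the engine proxy (arguments assumed by analogy)
pkill -f proxy.py
python3 proxy.py <worker_ip> 8000
# on the master/storage node: stop and relaunch the gateway (arguments assumed)
pkill -f gateway.py
python3 gateway.py <master_ip> 7000
```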
## MasterSP Start-up
The following operations help to run scripts under MasterSP. First, enter `src/workflow_manager` and change the configuration to `DATA_MODE = raw` and `CONTROL_MODE = MasterSP` on all 7 worker nodes and the storage node. Then, restart the engine proxy on each worker node with the [proxy start](#jump) command, and restart the gateway on the storage node with the [gateway start](#jump) command.
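A minimal configuration check before restarting, assuming both switches live in `src/workflow_manager/config.py` (the README names the directory; the exact file is an assumption):

```
# confirm the two deployment switches on every node
grep -nE 'DATA_MODE|CONTROL_MODE' src/workflow_manager/config.py
# expect DATA_MODE set to raw and CONTROL_MODE set to MasterSP
```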
Enter `src/workflow_manager/config.py`, and define the `MASTER_HOST` as `<master_ip>:8000`. Then,
- start another proxy at the storage node as the virtual master node by the following command:
+ start another proxy on the storage node as the virtual master node with the following command:
```
python3 proxy.py <master_ip> 8000
```
@@ -52,7 +52,7 @@ If you would like to run scripts under MasterSP, you have finished all the opera
## <span id="jumpexper">Run Experiment</span>
We provide some test scripts under `test/asplos`.
- **<span id="note">Note:**</span> We recommend to restart all `proxy.py` and `gateway.py` processes whenever you start the `run.py` script, to avoid any potential bugs. The restart will clear all background function containers and reclaim the memory space.
+ **<span id="note">Note:</span>** We recommend restarting all `proxy.py` and `gateway.py` processes whenever you start the `run.py` script, to avoid any potential bugs. The restart clears all background function containers and reclaims the memory space.
### Scheduler Scalability: the overhead of the graph scheduler when scaling up the total number of nodes in one workflow
58
58
@@ -63,7 +63,7 @@ Directly run on the storage node:
### Component Overhead: overhead of one workflow engine
- Start a proxy at any worker node (skip if you have already done in the above start-up) and get its pid. Then run it on any worker node:
+ Start a proxy on any worker node (skip this if you have already done so in the start-up above) and get its pid. Then run it on any worker node:
```
python3 run.py --pid=<pid>
```
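One hedged way to obtain the proxy's pid, assuming it was started with `python3 proxy.py ...` as shown earlier; substitute the number it prints into the command above:

```
# find the pid of the running engine proxy
pgrep -f proxy.py
```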
@@ -100,9 +100,9 @@ Make the WorkerSP deployment, run it on the storage node:
Then make the MasterSP deployment, run it again with `--controlmode=MasterSP`.
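A hedged invocation sketch for the MasterSP run; `--controlmode=MasterSP` is the flag named above, and any additional flags `run.py` may require are not shown in this README:

```
# repeat the same test under the master-side schedule pattern
python3 run.py --controlmode=MasterSP
```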
- ### 99%-ile latency at 50MB/s, 6 request/min for all benchmark
+ ### 99%-ile latency at 50MB/s, 6 requests/min for all benchmarks
- 1. Download wondershaper at`https://github.com/magnific0/wondershaper` to the storage node.
+ 1. Download wondershaper from `https://github.com/magnific0/wondershaper` to the storage node.
2. Make the WorkerSP deployment, and run the following commands on your storage node. These clear the previous bandwidth setting and set the network bandwidth to 50MB/s:
```
@@ -118,7 +118,7 @@ Then make the MasterSP deployment, run it again with `--controlmode=MasterSP`.
4. Make the MasterSP deployment and run it again with `--datamode=raw`.
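The 50MB/s commands for step 2 are elided by the diff view above. A hypothetical sketch, assuming wondershaper's `-c` clear option and the same kbit/s scale as the values shown later in this README (204800 for 25MB/s, 614400 for 75MB/s), which puts 50MB/s at 409600:

```
# clear any previous limit on the docker bridge, then cap it at roughly 50MB/s
./wondershaper -c -a docker0
./wondershaper -a docker0 -u 409600 -d 409600
```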
- ### 99%-ile latency at from 25MB/s-100MB/s, and with dfferent request/min for benchmark genome and video
+ ### 99%-ile latency from 25MB/s to 100MB/s, and with different requests/min for the genome and video benchmarks
1. Make the WorkerSP deployment, and run the following commands on your storage node. These clear the previous bandwidth setting and set the network bandwidth to 25MB/s:
```
@@ -127,7 +127,7 @@ Then make the MasterSP deployment, run it again with `--controlmode=MasterSP`.
./wondershaper -a docker0 -u 204800 -d 204800
```
and then run the following commands on the storage node.
- **Remember to restart all `proxy.py` and the `gateway.py` whenever you start the `run.py` script, to avoid any potential bugs.**
+ **Remember to restart all `proxy.py` processes and the `gateway.py` whenever you start the `run.py` script, to avoid any potential bugs.**
@@ -146,4 +146,4 @@ and then run the following commands on the storage node.
```
3. Other configurations follow the same logic (`-u 614400 -d 614400` and `--bandwidth=75` correspond to 75MB/s; `-u 819200 -d 819200` and `--bandwidth=100` correspond to 100MB/s).
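A hedged sketch of the 75MB/s case; the wondershaper values and `--bandwidth=75` come from the item above, while the `--workflow=genome` selector is an assumption mirroring the `--workflow=video` flag mentioned below:

```
# reset the docker bridge limit to roughly 75MB/s
./wondershaper -c -a docker0
./wondershaper -a docker0 -u 614400 -d 614400
# repeat the test with the matching bandwidth label (flag combination assumed)
python3 run.py --bandwidth=75 --workflow=genome
```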
- 4. Make the MasterSP deployment, review the step 1 and 2, however with `--datamode=raw`. The evaluation of benchmark follows the same logic with `--workflow=video`.
+ 4. Make the MasterSP deployment and repeat steps 1 and 2, but with `--datamode=raw`. The evaluation of the video benchmark follows the same logic with `--workflow=video`.
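A hypothetical invocation for this last step, combining flags named in this README; whether they are passed in a single `run.py` call, and which `--bandwidth` value accompanies them, depends on the scripts themselves:

```
# MasterSP deployment, raw data mode, video workflow (flag combination assumed)
python3 run.py --datamode=raw --workflow=video
```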