
Commit 1bc4e2d

Update README.md
1 parent 59a2abc commit 1bc4e2d

1 file changed

README.md (+16 -16)
@@ -2,27 +2,27 @@

## Introduction

-FaaSFlow is a serverless workflow engine that enables efficient workflow execution in 2 ways: a worker-side workflow schedule pattern to reduce scheduling overhead, and a adaptive storage library to use local memory to transfer data between functions on the same node.
+FaaSFlow is a serverless workflow engine that enables efficient workflow execution in 2 ways: a worker-side workflow schedule pattern to reduce scheduling overhead, and an adaptive storage library that uses local memory to transfer data between functions on the same node.

## Hardware Dependencies and Private IP Address

1. In our experiment setup, we use an Aliyun ECS instance running Ubuntu 18.04 (ecs.g7.2xlarge, cores: 8, DRAM: 32GB) for each worker node, and an ecs.g6e.4xlarge (cores: 16, DRAM: 64GB) instance running Ubuntu 18.04 and CouchDB for the database node.

-2. Please save the private IP address of the storage node as the **<master_ip>**, and save the private IP addredd of other 7 worker nodes as the **<worker_ip>**.
+2. Please save the private IP address of the storage node as **<master_ip>**, and the private IP addresses of the other 7 worker nodes as **<worker_ip>**.

## Installation and Software Dependencies

Clone our code `https://github.com/lzjzx1122/FaaSFlow.git` and:

-1. Reset `worker_address` configuration with your <worker_ip>:8000 on `src/grouping/node_info.yaml`. It will specify your worker address. The `scale_limit: 120` respresents the maximum container numbers that can be deployed in each 32GB memory instance, and it do not need any change by default.
+1. Reset the `worker_address` configuration with your <worker_ip>:8000 in `src/grouping/node_info.yaml`. It specifies your workers' addresses. The `scale_limit: 120` entry is the maximum number of containers that can be deployed on each 32GB-memory instance, and it does not need to be changed by default.

-2. Reset `COUCHDB_URL` as `http://openwhisk:openwhisk@<master_ip>:5984/` in `src/container/config.py`, `src/workflow_manager/config.py`, `test/asplos/config.py`. It will specify the corresponding database storage you build previously.
+2. Reset `COUCHDB_URL` to `http://openwhisk:openwhisk@<master_ip>:5984/` in `src/container/config.py`, `src/workflow_manager/config.py`, and `test/asplos/config.py`. It points to the database storage you built previously.

3. Then, clone the modified code into each node (8 nodes total).

-4. On the storage node: Run `scripts/db_setup.bash`. This install docker, couchdb, some python packages, and build grouping results from 8 benchmarks.
+4. On the storage node: run `scripts/db_setup.bash`. It installs docker, CouchDB, and some python packages, and builds the grouping results for the 8 benchmarks.

-5. On each worker node: Run `scripts/worker_setup.bash`. This install docker, redis, some python packages, and build docker images from 8 benchmarks.
+5. On each worker node: run `scripts/worker_setup.bash`. It installs docker, Redis, and some python packages, and builds the docker images for the 8 benchmarks (both setup commands are sketched just below).

## WorkerSP Start-up

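For steps 4 and 5 of the installation above, a minimal sketch of the per-node setup commands, assuming the repository was cloned into `FaaSFlow/` and that the `.bash` scripts are invoked with bash (only the script paths come from the README; the clone directory and invocation style are assumptions):

```
# Storage node: installs docker, CouchDB, and the required python packages,
# and builds the grouping results for the 8 benchmarks.
cd FaaSFlow
bash scripts/db_setup.bash

# Each of the 7 worker nodes: installs docker, Redis, and the required python
# packages, and builds the docker images for the 8 benchmarks.
cd FaaSFlow
bash scripts/worker_setup.bash
```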

@@ -34,16 +34,16 @@ Enter `test/asplos/config.py` and define the `GATEWAY_ADDR` as `<master_ip>:7000
```
python3 gateway.py <master_ip> 7000 (gateway start)
```
-If you would like to run scripts under WorkerSP, you have finished all the operations and allowed to send invocations **by `run.py` scripts for all WorkerSP-based performance test**. Detailed scripts usage is introduced in [Run Experiment](#jumpexper).
+If you would like to run scripts under WorkerSP, you have now finished all the operations and can send invocations **via the `run.py` scripts for all WorkerSP-based performance tests**. Detailed script usage is introduced in [Run Experiment](#jumpexper).

-**Note:** We recommend to restart the `proxy.py` on each worker node and the `gateway.py` on the master node whenever you start the `run.py` script, to avoid any potential bugs.
+**Note:** We recommend restarting `proxy.py` on each worker node and `gateway.py` on the master node whenever you start the `run.py` script, to avoid potential bugs.

## MasterSP Start-up

The following operations help to run scripts under MasterSP. First, enter `src/workflow_manager` and change the configuration to `DATA_MODE = raw` and `CONTROL_MODE = MasterSP` on all 7 worker nodes and the storage node. Then, restart the engine proxy on each worker node with the [proxy start](#jump) command, and restart the gateway on the storage node with the [gateway start](#jump) command.

Enter `src/workflow_manager/config.py`, and define the `MASTER_HOST` as `<master_ip>:8000`. Then,
-start another proxy at the storage node as the virtual master node by the following command:
+start another proxy on the storage node as the virtual master node with the following command (the full start-up sequence is sketched below):
```
python3 proxy.py <master_ip> 8000
```
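A condensed sketch of the start-up sequence described in the two sections above, assuming `proxy.py` and `gateway.py` live under `src/workflow_manager` (as the MasterSP instructions suggest) and that each worker proxy is started with its own `<worker_ip>:8000` address; the worker-side proxy arguments are an assumption modeled on the master-side command:

```
# On every worker node (both WorkerSP and MasterSP): start the engine proxy.
cd src/workflow_manager
python3 proxy.py <worker_ip> 8000    # assumed worker-side form of the proxy-start command

# On the storage/master node: start the gateway.
cd src/workflow_manager
python3 gateway.py <master_ip> 7000

# MasterSP only: start one more proxy on the storage node as the virtual master node.
python3 proxy.py <master_ip> 8000
```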
@@ -52,7 +52,7 @@ If you would like to run scripts under MasterSP, you have finished all the opera
## <span id="jumpexper">Run Experiment</span>

We provide some test scripts under `test/asplos`.
-**<span id="note">Note:**</span> We recommend to restart all `proxy.py` and `gateway.py` processes whenever you start the `run.py` script, to avoid any potential bugs. The restart will clear all background function containers and reclaim the memory space.
+**<span id="note">Note:</span>** We recommend restarting all `proxy.py` and `gateway.py` processes whenever you start the `run.py` script, to avoid potential bugs. The restart clears all background function containers and reclaims the memory space.

### Scheduler Scalability: the overhead of the graph scheduler when scaling up the total nodes of one workflow

@@ -63,7 +63,7 @@ Directly run on the storage node:

### Component Overhead: overhead of one workflow engine

-Start a proxy at any worker node (skip if you have already done in the above start-up) and get its pid. Then run it on any worker node:
+Start a proxy on any worker node (skip this if you have already done so in the start-up above) and get its pid. Then run the following on any worker node (one way to capture the pid is sketched below):
```
python3 run.py --pid=<pid>
```
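One way to obtain the proxy pid used above, as a sketch: it assumes the proxy is launched in the background from `src/workflow_manager`, that `run.py` lives under `test/asplos`, and that capturing the pid via `$!` or `pgrep` is acceptable; any other way of reading the pid works just as well.

```
# Start the engine proxy on a worker node and remember its pid.
cd src/workflow_manager
python3 proxy.py <worker_ip> 8000 &
PROXY_PID=$!                          # alternatively: pgrep -f "python3 proxy.py"

# Measure the component overhead of the workflow engine against that proxy.
cd ../../test/asplos
python3 run.py --pid=$PROXY_PID
```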
@@ -100,9 +100,9 @@ Make the WorkerSP deployment, run it on the storage node:
Then make the MasterSP deployment, run it again with `--controlmode=MasterSP`.


-### 99%-ile latency at 50MB/s, 6 request/min for all benchmark
+### 99%-ile latency at 50MB/s, 6 requests/min for all benchmarks

-1. Download wondershaper at `https://github.com/magnific0/wondershaper` to the storage node.
+1. Download wondershaper from `https://github.com/magnific0/wondershaper` to the storage node.

2. Make the WorkerSP deployment, and run the following commands on your storage node. These will clear the previous bandwidth setting and set the network bandwidth to 50MB/s:
```
@@ -118,7 +118,7 @@ Then make the MasterSP deployment, run it again with `--controlmode=MasterSP`.
4. Make the MasterSP deployment, run it again with `--datamode=raw`


-### 99%-ile latency at from 25MB/s-100MB/s, and with dfferent request/min for benchmark genome and video
+### 99%-ile latency at 25MB/s-100MB/s, with different requests/min for the genome and video benchmarks

1. Make the WorkerSP deployment, and run the following commands on your storage node. These will clear the previous bandwidth setting and set the network bandwidth to 25MB/s:
```
@@ -127,7 +127,7 @@ Then make the MasterSP deployment, run it again with `--controlmode=MasterSP`.
./wondershaper -a docker0 -u 204800 -d 204800
```
and then run the following commands on the storage node.
-**Remember to restart all `proxy.py` and the `gateway.py` whenever you start the `run.py` script, to avoid any potential bugs.**
+**Remember to restart all `proxy.py` processes and the `gateway.py` whenever you start the `run.py` script, to avoid potential bugs.**

```
python3 run.py --bandwidth=25 --datamode=optimized --workflow=genome
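For reference, wondershaper's `-u`/`-d` limits are given in Kbit/s, so the values in these steps follow from MB/s × 8 × 1024: 25MB/s gives 204800, 75MB/s gives 614400, and 100MB/s gives 819200 (all quoted in step 3 below); by the same arithmetic, the earlier 50MB/s setting corresponds to 409600. A sketch of the clear-and-limit pair, assuming wondershaper's documented `-c` (clear) flag and the `docker0` adapter used above:

```
# Clear any previous limit on the docker bridge, then cap it at 25MB/s (204800 Kbit/s).
# For the 50MB/s experiment, substitute 409600 for both values.
cd wondershaper
./wondershaper -c -a docker0
./wondershaper -a docker0 -u 204800 -d 204800
```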
@@ -146,4 +146,4 @@ and then run the following commands on the storage node.
```
3. Other configurations follow the same logic (`-u 614400 -d 614400` and `--bandwidth=75` correspond to 75MB/s; `-u 819200 -d 819200` and `--bandwidth=100` correspond to 100MB/s).

-4. Make the MasterSP deployment, review the step 1 and 2, however with `--datamode=raw`. The evaluation of benchmark follows the same logic with `--workflow=video`.
+4. Make the MasterSP deployment and repeat steps 1 and 2, but with `--datamode=raw`. The evaluation of the video benchmark follows the same logic with `--workflow=video`.
