Commit 1309717: Update version to 0.35.0

1 parent 3d17e8b

30 files changed, +90 -90 lines changed
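A release bump like this is purely mechanical: every `CORTEX_VERSION=master` assignment (or stale `0.34.0` reference) becomes `0.35.0`. A minimal sketch of how such a substitution could be scripted; the `sed` invocation and temp file below are illustrative assumptions, not the project's actual release tooling:

```shell
#!/usr/bin/env bash
set -euo pipefail

OLD="master"   # version string used on the development branch
NEW="0.35.0"   # the release tag being cut in this commit

# Recreate one of the touched lines (build/build-image.sh assigns
# CORTEX_VERSION near the top) and apply the substitution to it.
tmpfile="$(mktemp)"
printf 'CORTEX_VERSION=%s\n' "$OLD" > "$tmpfile"
sed -i "s/^CORTEX_VERSION=${OLD}\$/CORTEX_VERSION=${NEW}/" "$tmpfile"
grep 'CORTEX_VERSION' "$tmpfile"   # prints: CORTEX_VERSION=0.35.0
```

In practice the repository marks bump sites with comments like `<!-- CORTEX_VERSION_README -->` (visible in the docs diffs below), which lets a release script find them reliably.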

build/build-image.sh (+1 -1)

````diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.35.0
 
 image=$1
 
````

build/cli.sh (+1 -1)

````diff
@@ -19,7 +19,7 @@ set -euo pipefail
 
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.35.0
 
 arg1=${1:-""}
 upload="false"
````

build/push-image.sh (+1 -1)

````diff
@@ -17,7 +17,7 @@
 
 set -euo pipefail
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.35.0
 
 host=$1
 image=$2
````

dev/export_images.sh (+1 -1)

````diff
@@ -20,7 +20,7 @@ set -euo pipefail
 ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"
 
 # CORTEX_VERSION
-cortex_version=master
+cortex_version=0.35.0
 
 # user set variables
 ecr_region=$1
````

dev/registry.sh (+1 -1)

````diff
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-CORTEX_VERSION=master
+CORTEX_VERSION=0.35.0
 
 set -eo pipefail
 
````

docs/clients/install.md (+4 -4)

````diff
@@ -9,10 +9,10 @@ pip install cortex
 ```
 
 <!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.34.0):
+To install or upgrade to a specific version (e.g. v0.35.0):
 
 ```bash
-pip install cortex==0.34.0
+pip install cortex==0.35.0
 ```
 
 To upgrade to the latest version:
@@ -25,8 +25,8 @@ pip install --upgrade cortex
 
 <!-- CORTEX_VERSION_README x2 -->
 ```bash
-# For example to download CLI version 0.34.0 (Note the "v"):
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.34.0/get-cli.sh)"
+# For example to download CLI version 0.35.0 (Note the "v"):
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.35.0/get-cli.sh)"
 ```
 
 By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
````

docs/clients/python.md (+6 -6)

````diff
@@ -93,7 +93,7 @@ Deploy API(s) from a project directory.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/ for schema.
 - `project_dir` - Path to a python project.
 - `force` - Override any in-progress api updates.
 - `wait` - Streams logs until the APIs are ready.
@@ -115,7 +115,7 @@ Deploy a Realtime API.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/realtime-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/realtime-apis/configuration for schema.
 - `handler` - A Cortex Handler class implementation.
 - `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.
@@ -139,7 +139,7 @@ Deploy an Async API.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/async-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/async-apis/configuration for schema.
 - `handler` - A Cortex Handler class implementation.
 - `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.
@@ -162,7 +162,7 @@ Deploy a Batch API.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/batch-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/batch-apis/configuration for schema.
 - `handler` - A Cortex Handler class implementation.
 - `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.
@@ -184,7 +184,7 @@ Deploy a Task API.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/task-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/task-apis/configuration for schema.
 - `task` - A callable class implementation.
 - `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.
@@ -206,7 +206,7 @@ Deploy a Task API.
 
 **Arguments**:
 
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/realtime-apis/traffic-splitter/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/realtime-apis/traffic-splitter/configuration for schema.
 
 
 **Returns**:
````
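Each deploy method above takes an `api_spec` dictionary validated against the linked schema. A minimal sketch of such a dictionary for a Realtime API; the name, handler path, and field values are placeholders for illustration, and actually deploying it would require a configured client and a running cluster, which this snippet does not assume:

```python
# Illustrative api_spec for a Realtime API; "text-generator" and
# "handler.py" are placeholder values, not part of this commit.
api_spec = {
    "name": "text-generator",
    "kind": "RealtimeAPI",
    "handler": {
        "type": "python",
        "path": "handler.py",
    },
}

# Per the docs above, this dictionary would be passed to a client
# deploy call together with project_dir, force, and wait arguments.
print(api_spec["kind"])  # prints: RealtimeAPI
```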

docs/clusters/advanced/self-hosted-images.md (+1 -1)

````diff
@@ -19,7 +19,7 @@ Clone the Cortex repo using the release tag corresponding to your version (which
 <!-- CORTEX_VERSION_README -->
 
 ```bash
-export CORTEX_VERSION=0.34.0
+export CORTEX_VERSION=0.35.0
 git clone --depth 1 --branch v$CORTEX_VERSION https://github.com/cortexlabs/cortex.git
 ```
 
````

docs/clusters/management/create.md (+26 -26)

````diff
@@ -96,30 +96,30 @@ The docker images used by the cluster can also be overridden. They can be config
 
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_controller_manager: quay.io/cortexlabs/controller-manager:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_async_gateway: quay.io/cortexlabs/async-gateway:master
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-image_metrics_server: quay.io/cortexlabs/metrics-server:master
-image_inferentia: quay.io/cortexlabs/inferentia:master
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-image_nvidia: quay.io/cortexlabs/nvidia:master
-image_fluent_bit: quay.io/cortexlabs/fluent-bit:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_prometheus: quay.io/cortexlabs/prometheus:master
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:master
-image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:master
-image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:master
-image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:master
-image_grafana: quay.io/cortexlabs/grafana:master
-image_event_exporter: quay.io/cortexlabs/event-exporter:master
-image_enqueuer: quay.io/cortexlabs/enqueuer:master
-image_kubexit: quay.io/cortexlabs/kubexit:master
+image_operator: quay.io/cortexlabs/operator:0.35.0
+image_controller_manager: quay.io/cortexlabs/controller-manager:0.35.0
+image_manager: quay.io/cortexlabs/manager:0.35.0
+image_downloader: quay.io/cortexlabs/downloader:0.35.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.35.0
+image_async_gateway: quay.io/cortexlabs/async-gateway:0.35.0
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.35.0
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.35.0
+image_inferentia: quay.io/cortexlabs/inferentia:0.35.0
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.35.0
+image_nvidia: quay.io/cortexlabs/nvidia:0.35.0
+image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.35.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.35.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.35.0
+image_prometheus: quay.io/cortexlabs/prometheus:0.35.0
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.35.0
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.35.0
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.35.0
+image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.35.0
+image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.35.0
+image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.35.0
+image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.35.0
+image_grafana: quay.io/cortexlabs/grafana:0.35.0
+image_event_exporter: quay.io/cortexlabs/event-exporter:0.35.0
+image_enqueuer: quay.io/cortexlabs/enqueuer:0.35.0
+image_kubexit: quay.io/cortexlabs/kubexit:0.35.0
 ```
````
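Every override in this list must move in lockstep when a release is cut, which is exactly what this commit does across all 26 entries. A small consistency check one could run over such a config; the helper below is a hypothetical sketch, not part of Cortex:

```python
# Hypothetical sanity check: every image override in the cluster
# config should carry the same release tag after a version bump.
images = {
    "image_operator": "quay.io/cortexlabs/operator:0.35.0",
    "image_manager": "quay.io/cortexlabs/manager:0.35.0",
    "image_grafana": "quay.io/cortexlabs/grafana:0.35.0",
}

def release_tags(image_map):
    # The tag is everything after the last ":"; suffixed tags such as
    # 0.35.0-cuda10.2-cudnn8 reduce to their leading version component.
    return {ref.rsplit(":", 1)[1].split("-")[0] for ref in image_map.values()}

assert release_tags(images) == {"0.35.0"}
```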

docs/workloads/async/configuration.md (+3 -3)

````diff
@@ -26,7 +26,7 @@ handler:
   shell: <string> # relative path to a shell script for system package installation (default: dependencies.sh)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:master, quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn8, or quay.io/cortexlabs/python-handler-inf:master based on compute)
+  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:0.35.0, quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn8, or quay.io/cortexlabs/python-handler-inf:0.35.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -55,8 +55,8 @@ handler:
   signature_key: # name of the signature def to use for prediction (required if your model has more than one signature def)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:master)
-  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master, quay.io/cortexlabs/tensorflow-serving-gpu:master, or quay.io/cortexlabs/tensorflow-serving-inf:master based on compute)
+  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:0.35.0)
+  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.35.0, quay.io/cortexlabs/tensorflow-serving-gpu:0.35.0, or quay.io/cortexlabs/tensorflow-serving-inf:0.35.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
````
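Filled in, a minimal Async API configuration using the fields above might look like the following sketch; the API name and handler path are placeholders, and the image line simply spells out the 0.35.0 default noted in the diff:

```yaml
- name: my-async-api    # placeholder API name
  kind: AsyncAPI
  handler:
    type: python
    path: handler.py    # placeholder handler file
    image: quay.io/cortexlabs/python-handler-cpu:0.35.0  # the CPU default above
    log_level: info
```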

docs/workloads/async/models.md (+1 -1)

````diff
@@ -44,7 +44,7 @@ class Handler:
 <!-- CORTEX_VERSION_MINOR -->
 
 Cortex provides a `tensorflow_client` to your handler's constructor. `tensorflow_client` is an instance
-of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/tensorflow.py)
+of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.35/python/serve/cortex_internal/lib/client/tensorflow.py)
 that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as
 an instance variable in your handler class, and your `handle_async()` function should call `tensorflow_client.predict()` to make
 an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions
````

docs/workloads/batch/configuration.md (+3 -3)

````diff
@@ -19,7 +19,7 @@ handler:
   path: <string> # path to a python file with a Handler class definition, relative to the Cortex root (required)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:master or quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn8 based on compute)
+  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:0.35.0 or quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn8 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -49,8 +49,8 @@ handler:
   batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
   config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:master)
-  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master or quay.io/cortexlabs/tensorflow-serving-gpu:master based on compute)
+  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:0.35.0)
+  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.35.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.35.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
````

docs/workloads/batch/models.md (+1 -1)

````diff
@@ -55,7 +55,7 @@ class Handler:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Handler class' constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your `handle_batch()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `handle_batch()` function as well.
+Cortex provides a `tensorflow_client` to your Handler class' constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.35/python/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your `handle_batch()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `handle_batch()` function as well.
 
 When multiple models are defined using the Handler's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
 
````
docs/workloads/debugging.md (+2 -2)

````diff
@@ -12,10 +12,10 @@ For example:
 cortex prepare-debug cortex.yaml iris-classifier
 
 > docker run -p 9000:8888 \
->   -e "CORTEX_VERSION=0.34.0" \
+>   -e "CORTEX_VERSION=0.35.0" \
 >   -e "CORTEX_API_SPEC=/mnt/project/iris-classifier.debug.json" \
 >   -v /home/ubuntu/iris-classifier:/mnt/project \
->   quay.io/cortexlabs/python-handler-cpu:0.34.0
+>   quay.io/cortexlabs/python-handler-cpu:0.35.0
 ```
 
 Make a request to the api container:
````

docs/workloads/dependencies/images.md (+12 -12)

````diff
@@ -11,25 +11,25 @@ mkdir my-api && cd my-api && touch Dockerfile
 Cortex's base Docker images are listed below. Depending on the Cortex Handler and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:
 
 <!-- CORTEX_VERSION_BRANCH_STABLE x10 -->
-* Python Handler (CPU): `quay.io/cortexlabs/python-handler-cpu:master`
+* Python Handler (CPU): `quay.io/cortexlabs/python-handler-cpu:0.35.0`
 * Python Handler (GPU): choose one of the following:
-  * `quay.io/cortexlabs/python-handler-gpu:master-cuda10.0-cudnn7`
-  * `quay.io/cortexlabs/python-handler-gpu:master-cuda10.1-cudnn7`
-  * `quay.io/cortexlabs/python-handler-gpu:master-cuda10.1-cudnn8`
-  * `quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn7`
-  * `quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn8`
-  * `quay.io/cortexlabs/python-handler-gpu:master-cuda11.0-cudnn8`
-  * `quay.io/cortexlabs/python-handler-gpu:master-cuda11.1-cudnn8`
-* Python Handler (Inferentia): `quay.io/cortexlabs/python-handler-inf:master`
-* TensorFlow Handler (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-handler:master`
+  * `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.0-cudnn7`
+  * `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.1-cudnn7`
+  * `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.1-cudnn8`
+  * `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn7`
+  * `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn8`
+  * `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda11.0-cudnn8`
+  * `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda11.1-cudnn8`
+* Python Handler (Inferentia): `quay.io/cortexlabs/python-handler-inf:0.35.0`
+* TensorFlow Handler (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-handler:0.35.0`
 
 The sample `Dockerfile` below inherits from Cortex's Python CPU serving image, and installs 3 packages. `tree` is a system package and `pandas` and `rdkit` are Python packages.
 
 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```dockerfile
 # Dockerfile
 
-FROM quay.io/cortexlabs/python-handler-cpu:master
+FROM quay.io/cortexlabs/python-handler-cpu:0.35.0
 
 RUN apt-get update \
   && apt-get install -y tree \
@@ -47,7 +47,7 @@ If you need to upgrade the Python Runtime version on your image, you can follow
 ```Dockerfile
 # Dockerfile
 
-FROM quay.io/cortexlabs/python-handler-cpu:master
+FROM quay.io/cortexlabs/python-handler-cpu:0.35.0
 
 # upgrade python runtime version
 RUN conda update -n base -c defaults conda
````
