By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
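For example, a custom install location can be selected like this; the path below is illustrative, not a recommendation:

```shell
# Install the CLI somewhere other than /usr/local/bin/cortex
# (this path is just an example).
export CORTEX_INSTALL_PATH="$HOME/.local/bin/cortex"
mkdir -p "$(dirname "$CORTEX_INSTALL_PATH")"
echo "CLI will be installed at: $CORTEX_INSTALL_PATH"
```

The variable must be exported in the same shell session that runs the install command.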
docs/clients/python.md (+6 −6)

@@ -93,7 +93,7 @@ Deploy API(s) from a project directory.
 **Arguments**:
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/ for schema.
 - `project_dir` - Path to a python project.
 - `force` - Override any in-progress api updates.
 - `wait` - Streams logs until the APIs are ready.

@@ -115,7 +115,7 @@ Deploy a Realtime API.
 **Arguments**:
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/realtime-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/realtime-apis/configuration for schema.
 - `handler` - A Cortex Handler class implementation.
 - `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.

@@ -139,7 +139,7 @@ Deploy an Async API.
 **Arguments**:
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/async-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/async-apis/configuration for schema.
 - `handler` - A Cortex Handler class implementation.
 - `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.

@@ -162,7 +162,7 @@ Deploy a Batch API.
 **Arguments**:
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/batch-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/batch-apis/configuration for schema.
 - `handler` - A Cortex Handler class implementation.
 - `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.

@@ -184,7 +184,7 @@ Deploy a Task API.
 **Arguments**:
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/task-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/task-apis/configuration for schema.
 - `task` - A callable class implementation.
 - `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
 - `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.

@@ -206,7 +206,7 @@ Deploy a Task API.
 **Arguments**:
-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/realtime-apis/traffic-splitter/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/realtime-apis/traffic-splitter/configuration for schema.
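For reference, the `api_spec` argument described above is a plain dictionary. The following is a minimal sketch of deploying with the Python client; the environment name (`"aws"`), API name, and handler path are hypothetical, and the full schema is at https://docs.cortex.dev/v/0.35/:

```python
# Sketch of deploying a Realtime API with the Python client.
# The API name and handler path are hypothetical placeholders.
api_spec = {
    "name": "text-generator",  # hypothetical API name
    "kind": "RealtimeAPI",
    "handler": {
        "type": "python",
        "path": "handler.py",  # hypothetical path within the project
    },
}

# The client calls below are commented out so the sketch runs standalone;
# uncomment them in a project with a configured Cortex environment.
# import cortex
# cx = cortex.client("aws")
# cx.create_api(api_spec, project_dir=".", wait=True)
```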
docs/workloads/async/configuration.md (+3 −3)

@@ -26,7 +26,7 @@ handler:
   shell: <string> # relative path to a shell script for system package installation (default: dependencies.sh)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:master, quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn8, or quay.io/cortexlabs/python-handler-inf:master based on compute)
+  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:0.35.0, quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn8, or quay.io/cortexlabs/python-handler-inf:0.35.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)

@@ -55,8 +55,8 @@ handler:
   signature_key: <string> # name of the signature def to use for prediction (required if your model has more than one signature def)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:master)
-  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master, quay.io/cortexlabs/tensorflow-serving-gpu:master, or quay.io/cortexlabs/tensorflow-serving-inf:master based on compute)
+  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:0.35.0)
+  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.35.0, quay.io/cortexlabs/tensorflow-serving-gpu:0.35.0, or quay.io/cortexlabs/tensorflow-serving-inf:0.35.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
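Taken together, the handler fields above might appear in a minimal Async API configuration like the following sketch; the API name, handler path, and compute values are illustrative:

```yaml
# cortex.yaml - minimal Async API configuration (names and paths illustrative)
- name: my-async-api
  kind: AsyncAPI
  handler:
    type: python
    path: handler.py
    image: quay.io/cortexlabs/python-handler-cpu:0.35.0
  compute:
    cpu: 1
```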
docs/workloads/batch/configuration.md (+3 −3)

@@ -19,7 +19,7 @@ handler:
   path: <string> # path to a python file with a Handler class definition, relative to the Cortex root (required)
   config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:master or quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn8 based on compute)
+  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:0.35.0 or quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn8 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)

@@ -49,8 +49,8 @@ handler:
   batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
   config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (can be overridden by config passed in job submission) (optional)
   python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:master)
-  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master or quay.io/cortexlabs/tensorflow-serving-gpu:master based on compute)
+  image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:0.35.0)
+  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.35.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.35.0 based on compute)
   env: <string: string> # dictionary of environment variables
   log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
   shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
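A minimal Batch API configuration using the TensorFlow handler fields above might look like this sketch; the API name, model bucket path, and handler path are hypothetical:

```yaml
# cortex.yaml - minimal Batch API with the TensorFlow handler (names illustrative)
- name: my-batch-api
  kind: BatchAPI
  handler:
    type: tensorflow
    path: handler.py
    models:
      path: s3://example-bucket/model/  # hypothetical model location
    image: quay.io/cortexlabs/tensorflow-handler:0.35.0
    tensorflow_serving_image: quay.io/cortexlabs/tensorflow-serving-cpu:0.35.0
```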
docs/workloads/batch/models.md (+1 −1)

@@ -55,7 +55,7 @@ class Handler:
 ```

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Handler class' constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your `handle_batch()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `handle_batch()` function as well.
+Cortex provides a `tensorflow_client` to your Handler class' constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.35/python/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your `handle_batch()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `handle_batch()` function as well.

 When multiple models are defined using the Handler's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
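As a concrete illustration of the pattern described above, a skeletal Handler might look like the following sketch; `StubClient` stands in for the real `TensorFlowClient` that Cortex injects, and the model name `"text-generator"` is hypothetical:

```python
# Skeletal batch Handler illustrating the tensorflow_client pattern.
# StubClient is a stand-in for the TensorFlowClient that Cortex injects;
# a real client forwards requests to a TensorFlow Serving container.

class StubClient:
    def predict(self, payload, model_name=None, model_version=None):
        # Echo back which model was requested and how many inputs arrived.
        return {"model": model_name, "inputs": len(payload)}

class Handler:
    def __init__(self, tensorflow_client, config):
        # Save the injected client as an instance variable for later use.
        self.client = tensorflow_client
        self.config = config

    def handle_batch(self, payload, batch_id):
        # Preprocessing of the JSON payload would happen here.
        prediction = self.client.predict(payload, "text-generator")
        # Postprocessing of the prediction would happen here.
        return prediction

handler = Handler(StubClient(), config={})
result = handler.handle_batch([{"text": "hello"}, {"text": "world"}], batch_id="job-1")
print(result)  # {'model': 'text-generator', 'inputs': 2}
```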
Cortex's base Docker images are listed below. Depending on the Cortex Handler and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:

The sample `Dockerfile` below inherits from Cortex's Python CPU serving image and installs 3 packages: `tree` is a system package, and `pandas` and `rdkit` are Python packages.

 <!-- CORTEX_VERSION_BRANCH_STABLE -->
 ```dockerfile
 # Dockerfile

-FROM quay.io/cortexlabs/python-handler-cpu:master
+FROM quay.io/cortexlabs/python-handler-cpu:0.35.0

 RUN apt-get update \
   && apt-get install -y tree \

@@ -47,7 +47,7 @@ If you need to upgrade the Python Runtime version on your image, you can follow