Cortex looks for a file named `dependencies.sh` in the top-level Cortex project directory (i.e. the directory which contains `cortex.yaml`). For example:
```text
./my-classifier/
├── cortex.yaml
├── predictor.py
├── ...
└── dependencies.sh
```
`dependencies.sh` is executed with the bash shell during the initialization of each replica (before Python packages in `conda-packages.txt` are installed). Typical use cases include installing system packages required by your Predictor, building Python packages from source, etc.
Here is an example `dependencies.sh`, which installs the `tree` utility:

```bash
apt-get update && apt-get install -y tree
```

The `tree` utility can now be called inside your `predictor.py`:

```python
# predictor.py

import subprocess

class PythonPredictor:
    def __init__(self, config):
        subprocess.run(["tree"])
    ...
```
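As a sketch of the other use case mentioned above (building Python packages from source), `dependencies.sh` can also compile and install a package before the replica starts serving. The repository URL and package name below are hypothetical:

```bash
# dependencies.sh

# install a build toolchain for packages with native extensions
apt-get update && apt-get install -y build-essential git

# build and install a Python package from source (hypothetical repository)
git clone https://github.com/example-org/example-package.git /tmp/example-package
pip install /tmp/example-package
```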
You can also build a custom Docker image for use in your APIs, e.g. to avoid installing dependencies during replica initialization.
Create a Dockerfile to build your custom image:
```bash
mkdir my-api && cd my-api && touch Dockerfile
```
Cortex's base Docker images are listed below. Depending on the Cortex Predictor and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:
* Python Predictor (CPU): `cortexlabs/python-predictor-cpu-slim:0.19.0`
* Python Predictor (GPU): `cortexlabs/python-predictor-gpu-slim:0.19.0-cuda10.1` (also available in cuda10.0, cuda10.2, and cuda11.0)
* Python Predictor (Inferentia): `cortexlabs/python-predictor-inf-slim:0.19.0`
* TensorFlow Predictor (CPU, GPU, Inferentia): `cortexlabs/tensorflow-predictor-slim:0.19.0`
* ONNX Predictor (CPU): `cortexlabs/onnx-predictor-cpu-slim:0.19.0`
* ONNX Predictor (GPU): `cortexlabs/onnx-predictor-gpu-slim:0.19.0`
Note: the images listed above use the `-slim` suffix; Cortex's default API images are not `-slim`, since they have additional dependencies installed to cover common use cases. If you are building your own Docker image, starting with a `-slim` Predictor image will result in a smaller image size.
The sample Dockerfile below inherits from Cortex's Python CPU serving image, and installs 3 packages.
`tree` is a system package, and `pandas` and `rdkit` are Python packages.
```dockerfile
# Dockerfile

FROM cortexlabs/python-predictor-cpu-slim:0.19.0

RUN apt-get update \
    && apt-get install -y tree \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir pandas \
    && conda install -y conda-forge::rdkit \
    && conda clean -a
```
Create a repository to store your image:
```bash
# We create a repository in ECR

export AWS_ACCESS_KEY_ID="***"
export AWS_SECRET_ACCESS_KEY="***"

eval $(aws ecr get-login --no-include-email --region us-east-1)

aws ecr create-repository --repository-name=org/my-api --region=us-east-1
# take note of repository url
```
Build the image based on your Dockerfile and push it to its repository in ECR:
```bash
docker build . -t org/my-api:latest -t <repository_url>:latest

docker push <repository_url>:latest
```
Update your API configuration file to point to your image:
```yaml
# cortex.yaml

- name: my-api
  ...
  predictor:
    image: <repository_url>:latest
  ...
```
Note: for TensorFlow Predictors, two containers run together to serve predictions: one runs your Predictor code (`cortexlabs/tensorflow-predictor`), and the other runs TensorFlow Serving to load the SavedModel (`cortexlabs/tensorflow-serving-cpu` or `cortexlabs/tensorflow-serving-gpu`). There is a second available field, `tensorflow_serving_image`, that can be used to override the TensorFlow Serving image. Both of the default serving images (`cortexlabs/tensorflow-serving-cpu` and `cortexlabs/tensorflow-serving-gpu`) are based on the official TensorFlow Serving image (`tensorflow/serving`). Unless a different version of TensorFlow Serving is required, the TensorFlow Serving image shouldn't need to be overridden, since it's only used to load the SavedModel and does not run your Predictor code.
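If you do need to override the serving image, the `tensorflow_serving_image` field sits alongside `image` in the predictor section of your API configuration. The sketch below is only illustrative; `<serving_repository_url>` is a placeholder for a repository you would create yourself:

```yaml
# cortex.yaml

- name: my-api
  ...
  predictor:
    type: tensorflow
    image: <repository_url>:latest
    tensorflow_serving_image: <serving_repository_url>:latest
  ...
```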
Deploy your API as usual:
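```bash
# assumes the Cortex CLI is installed and configured for your cluster
cortex deploy
```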