Running on your machine or a single instance

Docker is required to run Cortex locally. In addition, your machine (or, for Mac users, Docker Desktop) should have at least 8GB of memory available if you plan to deploy large deep learning models.
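A quick local sanity check for these prerequisites (a sketch, assuming a Linux machine with /proc/meminfo; on macOS, check the memory allocation under Docker Desktop's resource preferences instead):

```shell
# check whether Docker is installed
if command -v docker >/dev/null 2>&1; then
  echo "docker found: $(docker --version)"
else
  echo "docker not found; install it before running Cortex locally"
fi

# report total memory in GB (Linux only; macOS users should check Docker Desktop's resource settings)
awk '/MemTotal/ {printf "total memory: %.1f GB\n", $2 / 1024 / 1024}' /proc/meminfo
```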

Install the CLI

$ bash -c "$(curl -sS"

Continue to deploy an example below.

Running at scale on AWS

Docker and valid AWS credentials are required to run a Cortex cluster on AWS.

Spin up a cluster

See cluster configuration to learn how you can customize your cluster with cluster.yaml, and see EC2 instances for an overview of several EC2 instance types.

To use GPU nodes, you may need to subscribe to the EKS-optimized AMI with GPU Support and file an AWS support ticket to increase the limit for your desired instance type.
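As an illustration of the kind of settings cluster.yaml controls, here is a minimal sketch; the field names are taken from the Cortex cluster configuration docs, but the instance type, region, and instance counts below are assumptions, so verify everything against the docs for your version before running `cortex cluster up`:

```yaml
# cluster.yaml (illustrative values; verify fields against the cluster configuration docs)
cluster_name: cortex
region: us-west-2
instance_type: m5.large   # use a GPU type such as g4dn.xlarge for GPU nodes
min_instances: 1
max_instances: 5
```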

# install the CLI on your machine
$ bash -c "$(curl -sS"
# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up

Deploy an example

# clone the Cortex repository
$ git clone -b 0.17
# navigate to the TensorFlow iris classification example
$ cd cortex/examples/tensorflow/iris-classifier
# deploy the model
$ cortex deploy
# view the status of the api
$ cortex get --watch
# stream logs from the api
$ cortex logs iris-classifier
# get the api's endpoint
$ cortex get iris-classifier
# classify a sample
$ curl -X POST -H "Content-Type: application/json" \
-d '{ "sepal_length": 5.2, "sepal_width": 3.6, "petal_length": 1.4, "petal_width": 0.3 }' \
<API endpoint>
# delete the api
$ cortex delete iris-classifier
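The classification request above can also be scripted; this is a sketch in which ENDPOINT is a placeholder for the URL printed by `cortex get iris-classifier`:

```shell
# placeholder: replace with the endpoint printed by `cortex get iris-classifier`
ENDPOINT="https://<API endpoint>"
PAYLOAD='{ "sepal_length": 5.2, "sepal_width": 3.6, "petal_length": 1.4, "petal_width": 0.3 }'

# sanity-check that the payload is valid JSON before sending it
echo "$PAYLOAD" | python3 -m json.tool

# send the sample for classification (uncomment once ENDPOINT is a real URL)
# curl -X POST -H "Content-Type: application/json" -d "$PAYLOAD" "$ENDPOINT"
```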

See uninstall if you'd like to spin down your cluster.