Which Predictor you use depends on how your model is exported:

- TensorFlow Predictor if your model is exported as a TensorFlow `SavedModel`
- ONNX Predictor if your model is exported in the ONNX format
- Python Predictor for all other cases
The response type of the predictor can vary depending on your requirements; see API responses below.
Cortex makes all files in the project directory (i.e. the directory which contains `cortex.yaml`) available for use in your Predictor implementation. Python bytecode files (`*.pyc`, `*.pyo`, `*.pyd`), files or folders that start with `.`, and the API configuration file (e.g. `cortex.yaml`) are excluded.
The following files can also be added at the root of the project's directory:

- `.cortexignore` file, which follows the same syntax and behavior as a `.gitignore` file.
- `.env` file, which exports environment variables that can be used in the predictor, as in the sketch below. Each line of this file must follow the `VARIABLE=value` format.
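A `.env` file might look like this (the variable names are purely illustrative):

```text
MODEL_BUCKET=my-bucket
LOG_LEVEL=info
```

Since these are exported as environment variables, they can be read in your Predictor with `os.environ`.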
For example, if your directory looks like this:
```text
./my-classifier/
├── cortex.yaml
├── values.json
├── predictor.py
├── ...
└── requirements.txt
```
You can access `values.json` in your Predictor like this:
```python
import json

class PythonPredictor:
    def __init__(self, config):
        with open("values.json", "r") as values_file:
            values = json.load(values_file)
        self.values = values
```
```python
# initialization code and variables can be declared here in global scope

class PythonPredictor:
    def __init__(self, config, python_client):
        """(Required) Called once before the API becomes available. Performs
        setup such as downloading/initializing the model or downloading a
        vocabulary.

        Args:
            config (required): Dictionary passed from API configuration (if
                specified). This may contain information on where to download
                the model and/or metadata.
            python_client (optional): Python client which is used to retrieve
                models for prediction. This should be saved for use in
                predict(). Required when `predictor.multi_model_reloading` is
                specified in the API configuration.
        """
        self.client = python_client  # optional

    def predict(self, payload, query_params, headers):
        """(Required) Called once per request. Preprocesses the request payload
        (if necessary), runs inference, and postprocesses the inference output
        (if necessary).

        Args:
            payload (optional): The request payload (see below for the possible
                payload types).
            query_params (optional): A dictionary of the query parameters used
                in the request.
            headers (optional): A dictionary of the headers sent in the request.

        Returns:
            Prediction or a batch of predictions.
        """
        pass

    def post_predict(self, response, payload, query_params, headers):
        """(Optional) Called in the background after returning a response.
        Useful for tasks that the client doesn't need to wait on before
        receiving a response, such as recording metrics or storing results.

        Note: post_predict() and predict() run in the same thread pool. The
        size of the thread pool can be increased by updating
        `threads_per_process` in the API configuration YAML.

        Args:
            response (optional): The response as returned by the predict method.
            payload (optional): The request payload (see below for the possible
                payload types).
            query_params (optional): A dictionary of the query parameters used
                in the request.
            headers (optional): A dictionary of the headers sent in the request.
        """
        pass

    def load_model(self, model_path):
        """(Optional) Called by Cortex to load a model when necessary.

        This method is required when the `predictor.multi_model_reloading`
        field is specified in the API configuration.

        Warning: this method must not make any modification to the model's
        contents on disk.

        Args:
            model_path: The path to the model on disk.

        Returns:
            The loaded model from disk. The returned object is what
            self.client.get_model() will return.
        """
        pass
```
When explicit model paths are specified in the Python predictor's API configuration, Cortex provides a `python_client` to your Predictor's constructor. `python_client` is an instance of PythonClient that is used to load model(s) (it calls the `load_model()` method of your predictor, which must be defined when using explicit model paths). It should be saved as an instance variable in your Predictor, and your `predict()` function should call `python_client.get_model()` to load your model for inference. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
When multiple models are defined using the Predictor's `models` field, the `python_client.get_model()` method expects an argument `model_name` which must hold the name of the model that you want to load (for example: `self.client.get_model("text-generator")`). There is also an optional second argument to specify the model version.
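For example, here is a minimal sketch of a multi-model `predict()` (assuming a model named `text-generator` in the `models` field, and that the object returned by your `load_model()` implementation exposes a `predict()` method):

```python
class PythonPredictor:
    def __init__(self, config, python_client):
        self.client = python_client  # save the client for use in predict()

    def predict(self, payload):
        # "text-generator" is a model name assumed to be defined in the
        # `models` field; an optional second argument selects the version
        model = self.client.get_model("text-generator")
        return model.predict(payload)  # assumes load_model() returned an object with predict()

    def load_model(self, model_path):
        # required when using explicit model paths; see the interface above
        pass
```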
For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as from where to download the model and initialization files, or any configurable model parameters. You define `config` in your API configuration, and it is passed through to your Predictor's constructor.
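For example, a sketch of a constructor that reads values from `config` (the keys shown here are hypothetical; they correspond to whatever you define under `config` in your API configuration):

```python
class PythonPredictor:
    def __init__(self, config):
        # hypothetical keys defined in the `config` field of the API configuration
        self.model_path = config["model_path"]
        self.max_length = config.get("max_length", 50)
```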
Your API can accept requests with different types of payloads such as JSON-parseable, `bytes`, or `starlette.datastructures.FormData` data. Navigate to the API requests section to learn about how headers can be used to change the type of `payload` that is passed into your `predict` method.
Your `predict` method can return different types of objects such as JSON-parseable, `string`, and `bytes` objects. Navigate to the API responses section to learn about how to configure your `predict` method to respond with different response codes and content-types.
Uses TensorFlow version 2.3.0 by default
```python
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        """(Required) Called once before the API becomes available. Performs
        setup such as downloading/initializing a vocabulary.

        Args:
            tensorflow_client (required): TensorFlow client which is used to
                make predictions. This should be saved for use in predict().
            config (required): Dictionary passed from API configuration (if
                specified).
        """
        self.client = tensorflow_client
        # Additional initialization may be done here

    def predict(self, payload, query_params, headers):
        """(Required) Called once per request. Preprocesses the request payload
        (if necessary), runs inference (e.g. by calling
        self.client.predict(model_input)), and postprocesses the inference
        output (if necessary).

        Args:
            payload (optional): The request payload (see below for the possible
                payload types).
            query_params (optional): A dictionary of the query parameters used
                in the request.
            headers (optional): A dictionary of the headers sent in the request.

        Returns:
            Prediction or a batch of predictions.
        """
        pass

    def post_predict(self, response, payload, query_params, headers):
        """(Optional) Called in the background after returning a response.
        Useful for tasks that the client doesn't need to wait on before
        receiving a response, such as recording metrics or storing results.

        Note: post_predict() and predict() run in the same thread pool. The
        size of the thread pool can be increased by updating
        `threads_per_process` in the API configuration YAML.

        Args:
            response (optional): The response as returned by the predict method.
            payload (optional): The request payload (see below for the possible
                payload types).
            query_params (optional): A dictionary of the query parameters used
                in the request.
            headers (optional): A dictionary of the headers sent in the request.
        """
        pass
```
Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of TensorFlowClient that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.
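For example, here is a minimal sketch (assuming a model named `text-generator` in the `models` field):

```python
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client  # save the client for use in predict()

    def predict(self, payload):
        # the second argument selects the model when multiple models are defined
        return self.client.predict(payload, "text-generator")
```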
For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as configurable model parameters or download links for initialization files. You define `config` in your API configuration, and it is passed through to your Predictor's constructor.
Your API can accept requests with different types of payloads such as JSON-parseable, `bytes`, or `starlette.datastructures.FormData` data. Navigate to the API requests section to learn about how headers can be used to change the type of `payload` that is passed into your `predict` method.
Your `predict` method can return different types of objects such as JSON-parseable, `string`, and `bytes` objects. Navigate to the API responses section to learn about how to configure your `predict` method to respond with different response codes and content-types.
Uses ONNX Runtime version 1.4.0 by default
```python
class ONNXPredictor:
    def __init__(self, onnx_client, config):
        """(Required) Called once before the API becomes available. Performs
        setup such as downloading/initializing a vocabulary.

        Args:
            onnx_client (required): ONNX client which is used to make
                predictions. This should be saved for use in predict().
            config (required): Dictionary passed from API configuration (if
                specified).
        """
        self.client = onnx_client
        # Additional initialization may be done here

    def predict(self, payload, query_params, headers):
        """(Required) Called once per request. Preprocesses the request payload
        (if necessary), runs inference (e.g. by calling
        self.client.predict(model_input)), and postprocesses the inference
        output (if necessary).

        Args:
            payload (optional): The request payload (see below for the possible
                payload types).
            query_params (optional): A dictionary of the query parameters used
                in the request.
            headers (optional): A dictionary of the headers sent in the request.

        Returns:
            Prediction or a batch of predictions.
        """
        pass

    def post_predict(self, response, payload, query_params, headers):
        """(Optional) Called in the background after returning a response.
        Useful for tasks that the client doesn't need to wait on before
        receiving a response, such as recording metrics or storing results.

        Note: post_predict() and predict() run in the same thread pool. The
        size of the thread pool can be increased by updating
        `threads_per_process` in the API configuration YAML.

        Args:
            response (optional): The response as returned by the predict method.
            payload (optional): The request payload (see below for the possible
                payload types).
            query_params (optional): A dictionary of the query parameters used
                in the request.
            headers (optional): A dictionary of the headers sent in the request.
        """
        pass
```
Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of ONNXClient that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`). There is also an optional third argument to specify the model version.
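For example, here is a minimal sketch (assuming a model named `text-generator` in the `models` field; the version string is illustrative):

```python
class ONNXPredictor:
    def __init__(self, onnx_client, config):
        self.client = onnx_client  # save the client for use in predict()

    def predict(self, payload):
        # second argument: model name; optional third argument: model version
        return self.client.predict(payload, "text-generator", "1")
```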
For proper separation of concerns, it is recommended to use the constructor's `config` parameter for information such as configurable model parameters or download links for initialization files. You define `config` in your API configuration, and it is passed through to your Predictor's constructor.
Your API can accept requests with different types of payloads such as JSON-parseable, `bytes`, or `starlette.datastructures.FormData` data. Navigate to the API requests section to learn about how headers can be used to change the type of `payload` that is passed into your `predict` method.
Your `predict` method can return different types of objects such as JSON-parseable, `string`, and `bytes` objects. Navigate to the API responses section to learn about how to configure your `predict` method to respond with different response codes and content-types.
The type of the `payload` parameter in `predict(self, payload)` can vary based on the content type of the request. The `payload` parameter is parsed according to the `Content-Type` header in the request. Here are the parsing rules (see below for examples):
- For `Content-Type: application/json`, `payload` will be the parsed JSON body.
- For `Content-Type: multipart/form-data` / `Content-Type: application/x-www-form-urlencoded`, `payload` will be `starlette.datastructures.FormData` (key-value pairs where the values are strings for text data, or `starlette.datastructures.UploadFile` for file uploads; see Starlette's documentation).
- For `Content-Type: text/plain`, `payload` will be a string. `utf-8` encoding is assumed, unless specified otherwise (e.g. via `Content-Type: text/plain; charset=us-ascii`).
- For all other `Content-Type` values, `payload` will be the raw `bytes` of the request body.
Here are some examples:
```bash
$ curl http://***.amazonaws.com/my-api \
    -X POST -H "Content-Type: application/json" \
    -d '{"key": "value"}'
```
When sending a JSON payload, the `payload` parameter will be a Python object:
```python
class PythonPredictor:
    def __init__(self, config):
        pass

    def predict(self, payload):
        print(payload["key"])  # prints "value"
```
```bash
$ curl http://***.amazonaws.com/my-api \
    -X POST -H "Content-Type: application/octet-stream" \
    --data-binary @object.pkl
```
Since the `Content-Type: application/octet-stream` header is used, the `payload` parameter will be a `bytes` object:
```python
import pickle

class PythonPredictor:
    def __init__(self, config):
        pass

    def predict(self, payload):
        obj = pickle.loads(payload)
        print(obj["key"])  # prints "value"
```
Here's an example where the binary data is an image:
```python
from PIL import Image
import io

class PythonPredictor:
    def __init__(self, config):
        pass

    def predict(self, payload, headers):
        img = Image.open(io.BytesIO(payload))  # read the payload bytes as an image
        print(img.size)
```
```bash
$ curl http://***.amazonaws.com/my-api \
    -X POST \
    -F "text=@text.txt" \
    -F "object=@object.pkl" \
    -F "image=@image.png"
```
When sending files via form data, the `payload` parameter will be `starlette.datastructures.FormData` (key-value pairs where the values are `starlette.datastructures.UploadFile`; see Starlette's documentation). Either `Content-Type: multipart/form-data` or `Content-Type: application/x-www-form-urlencoded` can be used (typically `Content-Type: multipart/form-data` is used for files, and is the default in the examples above).
```python
from PIL import Image
import pickle

class PythonPredictor:
    def __init__(self, config):
        pass

    def predict(self, payload):
        text = payload["text"].file.read()
        print(text.decode("utf-8"))  # prints the contents of text.txt

        obj = pickle.load(payload["object"].file)
        print(obj["key"])  # prints "value" assuming `object.pkl` is a pickled dictionary {"key": "value"}

        img = Image.open(payload["image"].file)
        print(img.size)  # prints the dimensions of image.png
```
```bash
$ curl http://***.amazonaws.com/my-api \
    -X POST \
    -d "key=value"
```
When sending text via form data, the `payload` parameter will be `starlette.datastructures.FormData` (key-value pairs where the values are strings; see Starlette's documentation). Either `Content-Type: multipart/form-data` or `Content-Type: application/x-www-form-urlencoded` can be used (typically `Content-Type: application/x-www-form-urlencoded` is used for text, and is the default in the examples above).
```python
class PythonPredictor:
    def __init__(self, config):
        pass

    def predict(self, payload):
        print(payload["key"])  # prints "value"
```
```bash
$ curl http://***.amazonaws.com/my-api \
    -X POST -H "Content-Type: text/plain" \
    -d "hello world"
```
Since the `Content-Type: text/plain` header is used, the `payload` parameter will be a `string` object:
```python
class PythonPredictor:
    def __init__(self, config):
        pass

    def predict(self, payload):
        print(payload)  # prints "hello world"
```
The response of your `predict()` function may be:

- A JSON-serializable object (lists, dictionaries, numbers, etc.)
- A `string` object (e.g. `"class 1"`)
- A `bytes` object (e.g. `bytes(4)` or `pickle.dumps(obj)`)
- An instance of `starlette.responses.Response` (see the sketch below)
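For example, returning a `starlette.responses.Response` lets you control the status code and content type explicitly (a minimal sketch):

```python
from starlette.responses import Response

class PythonPredictor:
    def __init__(self, config):
        pass

    def predict(self, payload):
        # respond with an explicit status code and content type
        return Response(content="class 1", status_code=200, media_type="text/plain")
```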
It is possible to make requests from one API to another within a Cortex cluster. All running APIs are accessible from within the predictor at `http://api-<api_name>:8888/predict`, where `<api_name>` is the name of the API you are making a request to.
For example, if there is an API named `text-generator` running in the cluster, you could make a request to it from a different API by using:
```python
import requests

class PythonPredictor:
    def predict(self, payload):
        response = requests.post(
            "http://api-text-generator:8888/predict",
            json={"text": "machine learning is"},
        )
        # ...
```
Note that the autoscaling configuration (i.e. `target_replica_concurrency`) for the API that is making the request should be modified with the understanding that requests will still be considered "in-flight" with the first API as the request is being fulfilled in the second API (during which it will also be considered "in-flight" with the second API).