Deploy machine learning applications without worrying about setting up infrastructure, managing dependencies, or orchestrating data pipelines.
Define your app: write your pipeline using Python, TensorFlow, and PySpark.
$ cortex deploy: deploy end-to-end machine learning pipelines to AWS with one command.
Serve predictions: serve real-time predictions via horizontally scalable JSON APIs.
Data ingestion: connect to your data warehouse and ingest data.
```yaml
- kind: environment
  name: dev
  data:
    type: csv
    path: s3a://my-bucket/data.csv
    schema: [@col1, @col2, ...]
```
Data transformation: use custom Python and PySpark code to transform data.
```yaml
- kind: transformed_column
  name: col1_normalized
  transformer_path: normalize.py  # Python / PySpark code
  input: @col1
```
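The transformer file (`normalize.py`) holds user-defined code. As an illustration only, the core of a z-score normalization could look like the plain-Python sketch below; this shows the math, not Cortex's exact transformer interface, and in practice the same logic would run over PySpark columns.

```python
def normalize(values):
    """Z-score normalization: (x - mean) / stddev.

    Illustrative sketch; a real transformer would operate on a
    PySpark column rather than a Python list.
    """
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]
```

After normalization the column has zero mean and unit variance, which typically helps neural network training converge.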
Model training: train models with custom TensorFlow code.
```yaml
- kind: model
  name: my_model
  estimator_path: dnn.py  # TensorFlow code
  target_column: @label_col
  input: [@col1_normalized, @col2_indexed, ...]
  hparams:
    hidden_units: [16, 8]
  training:
    batch_size: 32
    num_steps: 10000
```
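The estimator file (`dnn.py`) contains user-defined TensorFlow code. Purely to illustrate what the `hidden_units: [16, 8]` hyperparameter describes, here is a dependency-free sketch of a forward pass through a network with two hidden layers of 16 and 8 units; the real model would be built with TensorFlow, not this hand-rolled code.

```python
import random

def dense(x, weights, biases, activation=None):
    # One fully connected layer: out_j = act(sum_i x_i * w[i][j] + b[j])
    out = []
    for j in range(len(biases)):
        z = biases[j] + sum(x[i] * weights[i][j] for i in range(len(x)))
        out.append(max(z, 0.0) if activation == "relu" else z)
    return out

def build_dnn(input_dim, hidden_units, output_dim, seed=0):
    # Randomly initialized layers: input -> 16 -> 8 -> output
    rng = random.Random(seed)
    dims = [input_dim] + hidden_units + [output_dim]
    layers = []
    for n_in, n_out in zip(dims, dims[1:]):
        w = [[rng.uniform(-0.1, 0.1) for _ in range(n_out)] for _ in range(n_in)]
        b = [0.0] * n_out
        layers.append((w, b))
    return layers

def forward(layers, x):
    # ReLU on hidden layers, linear output layer
    for i, (w, b) in enumerate(layers):
        last = i == len(layers) - 1
        x = dense(x, w, b, activation=None if last else "relu")
    return x
```

With `hidden_units: [16, 8]` and a single output, `build_dnn(4, [16, 8], 1)` produces three weight matrices of shapes 4x16, 16x8, and 8x1.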
Prediction serving: serve real-time predictions via JSON APIs.
```yaml
- kind: api
  name: my-api
  model: @my_model
  compute:
    replicas: 3
```
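Clients call the deployed API with a JSON request. The sketch below shows one way to build such a request with Python's standard library; the endpoint URL and payload field names are hypothetical and depend on your app's schema.

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- adjust to your deployment.
url = "https://abc.amazonaws.com/my-api"
payload = {"samples": [{"col1_normalized": 0.12, "col2_indexed": 3}]}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to actually send it
```

Because the API is served by multiple replicas (`replicas: 3` above), requests are load-balanced horizontally.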
Deployment: Cortex deploys your pipeline on scalable cloud infrastructure.
```
$ cortex deploy

Ingesting data ...
Transforming data ...
Training models ...
Deploying API ...

Ready! https://abc.amazonaws.com/my-api
```
Machine learning pipelines as code: Cortex applications are defined using a simple declarative syntax that enables flexibility and reusability.
End-to-end machine learning workflow: Cortex spans the machine learning workflow from feature management to model training to prediction serving.
Built for the cloud: Cortex can handle production workloads and can be deployed in any AWS account in minutes.