cortex

v0.5.0
Published: Jun 27, 2019 License: Apache-2.0



Get started: Install · Tutorial · Docs · Examples

Learn more: Website · Blog · Subscribe · Twitter · Contact


Cortex deploys your machine learning models to your cloud infrastructure. You define your deployment with simple declarative configuration; Cortex containerizes your models, deploys them as scalable JSON APIs, and manages their lifecycle in production.

Cortex is actively maintained by Cortex Labs. We're a venture-backed team of infrastructure engineers and we're hiring.


How it works

Define your deployment using declarative configuration:

# cortex.yaml

- kind: api
  name: my-api
  model: s3://my-bucket/my-model.zip
  request_handler: handler.py
  compute:
    replicas: 4
    gpu: 2
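The configuration above points at a `request_handler` script. As a rough sketch of what such a handler might contain, the following is a hypothetical `handler.py`; the hook names `pre_inference` and `post_inference` and their signatures are assumptions for illustration, not taken from the Cortex docs:

```python
# handler.py -- hypothetical request handler sketch. The hook names and
# signatures below are assumptions, not the documented Cortex interface.

def pre_inference(sample, metadata=None):
    """Convert a raw JSON payload into the feature order the model expects."""
    # e.g. {"a": 1, "b": 2, "c": 3} -> [1, 2, 3]
    return [sample[key] for key in sorted(sample)]

def post_inference(prediction, metadata=None):
    """Wrap the raw model output in the JSON shape returned to clients."""
    return {"prediction": prediction}
```

Pre- and post-processing hooks like these let the deployed API accept and return plain JSON while the model itself sees tensors or feature vectors.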

Deploy to your cloud infrastructure:

$ cortex deploy

Deploying ...
Ready! https://amazonaws.com/my-api

Serve real-time predictions via scalable JSON APIs:

$ curl -d '{"a": 1, "b": 2, "c": 3}' https://amazonaws.com/my-api

{"prediction": "def"}
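The same request can be made from any HTTP client. A minimal Python sketch using only the standard library (the endpoint URL is the placeholder from the deploy output above, not a live service):

```python
# Minimal client for a deployed Cortex API using only the standard library.
import json
import urllib.request

def build_request(url, payload):
    """Build a POST request with a JSON body, mirroring `curl -d` above."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def predict(url, payload):
    """Send the request and decode the JSON prediction."""
    with urllib.request.urlopen(build_request(url, payload)) as resp:
        return json.loads(resp.read())

# Example (placeholder URL from the deploy output, not a real endpoint):
# predict("https://amazonaws.com/my-api", {"a": 1, "b": 2, "c": 3})
```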

Key features

  • Machine learning deployments as code: Cortex deployments are defined using declarative configuration.

  • Multi-framework support (coming soon): Cortex will support TensorFlow, Keras, PyTorch, Scikit-learn, XGBoost, and more.

  • CPU / GPU support: Cortex can run inference on CPU or GPU infrastructure.

  • Scalability: Cortex can scale APIs to handle production workloads.

  • Rolling updates: Cortex updates deployed APIs without any downtime.

  • Cloud native: Cortex can be deployed on any AWS account in minutes.
