Seldon Core is an open source platform for deploying machine learning models on Kubernetes.
- Quick Start
- Example Components
- Deployment guide
- Latest Seldon Images
- Usage Reporting
Machine learning deployment has many challenges, and Seldon Core aims to help with them. Its high-level goals are:
- Allow organisations to run and manage machine learning models built using any machine learning toolkit. Any model that can be run inside a Docker container can run in Seldon Core.
- Provide a production-ready machine learning deployment system on top of Kubernetes that integrates well with other Cloud Native tools.
- Provide the tools to allow complex metrics, optimization and proper compliance of machine learning models in production.
- Optimize your models using multi-armed bandit solvers
- Run Outlier Detection models
- Get alerts on Concept Drift
- Provide black-box model explanations of running models
- Automatically expose REST and gRPC endpoints to allow business applications to easily call your machine learning models.
- Handle full lifecycle management of the deployed model:
- Updating the runtime graph with no downtime
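To illustrate the automatically exposed REST endpoint, the snippet below builds the kind of JSON request body a client would send. This is a sketch: the feature names and the endpoint path are placeholders, and the exact payload schema for your version is defined by the Prediction API docs.

```python
import json

# Build a prediction request body with an ndarray data payload.
# Feature names and values here are purely illustrative.
payload = {
    "data": {
        "names": ["feature_1", "feature_2"],  # hypothetical feature names
        "ndarray": [[1.0, 2.0]],              # one row of feature values
    }
}
body = json.dumps(payload)
# A client would POST `body` (Content-Type: application/json) to the
# predictions endpoint exposed for the deployment, e.g.
# http://<seldon-endpoint>/api/v0.1/predictions (path may vary by version).
```

The response is a message of the same shape, with the model's predictions in the `data` field.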
A Kubernetes Cluster. Kubernetes can be deployed into many environments, both on cloud and on-premise.
Read the overview of using seldon-core.
- Jupyter notebooks with worked examples.
Seldon-core allows various types of components to be built and plugged into the runtime prediction graph. These include generic components such as models, routers, transformers and combiners. Some example components that are available as part of the project are:
- Models: examples that illustrate simple machine learning models to help you build your own integrations
- AWS SageMaker
Seldon allows you to build up runtime inference graphs that provide powerful optimization and metrics for your running models. Example components that help ensure you provide a compliant production machine learning system are available:
- Multi-Armed Bandits
- Outlier Detection
- MNIST Average Combiner - an ensemble of scikit-learn and TensorFlow models.
- IBM's Fabric for Deep Learning
- Istio and Seldon
- NVIDIA TensorRT and DL Inference Server
- Tensorflow Serving
- Intel OpenVINO
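As a sketch of what a pluggable model component looks like, the class below follows the Seldon Python wrapper convention of exposing a `predict(self, X, features_names)` method. The class name, artifact loading, and scoring logic are all illustrative stand-ins for your own model:

```python
# Minimal model component for the Seldon Python wrapper (sketch only).
# The wrapper instantiates the class and calls predict() for each request;
# the scoring logic below is a placeholder, not a real model.

class MyModel:
    def __init__(self):
        # Real components would load model artifacts (e.g. from disk) here.
        self.bias = 0.5  # hypothetical parameter

    def predict(self, X, features_names):
        # X is a 2-D array of feature rows; return one prediction per row.
        return [[sum(row) + self.bias] for row in X]
```

Once wrapped (for example via the project's source-to-image builders) into a Docker image, this component can be referenced from a runtime inference graph like any other model.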
Follow the install guide for details on ways to install Seldon Core onto your Kubernetes cluster.
- Wrap your runtime prediction model.
- Define your runtime inference graph in a seldon deployment custom resource.
- Deploy the graph.
- Serve Predictions.
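The custom resource in step 2 might look like the following minimal sketch, assuming the `v1alpha2` API version and a hypothetical image built with the Seldon wrapper; see the Seldon Deployment Custom Resource reference for the full schema:

```yaml
# Sketch of a single-model SeldonDeployment; names and image are illustrative.
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: my-model
spec:
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: my-org/my-model:0.1   # hypothetical wrapped model image
    graph:                             # runtime inference graph: one MODEL node
      name: classifier
      type: MODEL
      endpoint:
        type: REST
```

Applying this resource with `kubectl apply -f` creates the deployment, after which predictions can be served through the exposed endpoints.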
- Advanced graphs showing the various types of runtime prediction graphs that can be built.
- Handling large gRPC messages. Showing how you can add annotations to increase the gRPC max message size.
- Handling REST timeouts. Showing how you can add annotations to set the REST (and gRPC) timeouts.
- Distributed Tracing
- Prediction API
- Seldon Deployment Custom Resource
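The annotation-based settings mentioned above (gRPC max message size, REST timeouts) are applied through the SeldonDeployment metadata. A hedged sketch follows; the annotation names and units here are assumptions that may differ between Seldon Core versions, so check the annotations documentation for your release:

```yaml
# Sketch only: annotation names/units may vary by Seldon Core version.
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: large-message-model
  annotations:
    seldon.io/grpc-max-message-size: "10485760"  # assumed: raise gRPC limit to 10 MB
    seldon.io/rest-read-timeout: "10000"         # assumed: REST read timeout in ms
# predictor spec omitted in this sketch
```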
- GDG DevFest 2018 - Intro to Seldon and Outlier Detection
- Open Source Model Management Roundup Polyaxon, Argo and Seldon
- Kubecon Europe 2018 - Serving Machine Learning Models at Scale with Kubeflow and Seldon
- Polyaxon, Argo and Seldon for model training, package and deployment in Kubernetes
- Manage ML Deployments Like A Boss: Deploy Your First AB Test With Sklearn, Kubernetes and Seldon-core using Only Your Web Browser & Google Cloud
- Using PyTorch 1.0 and ONNX with Fabric for Deep Learning
- AI on Kubernetes - O'Reilly Tutorial
- Scalable Data Science - The State of DevOps/MLOps in 2018
- Istio Weekly Community Meeting - Seldon-core with Istio
- Openshift Commons ML SIG - Openshift S2I Helping ML Deployment with Seldon-Core
- Overview of Openshift source-to-image use in Seldon-Core
- IBM Framework for Deep Learning and Seldon-Core
- CartPole game by Reinforcement Learning, a journey from training to inference
- Annotation-based configuration.
- Notes for running in production.
- Helm configuration
- ksonnet configuration
Latest Seldon Images
|Description|Image|Version|
|---|---|---|
|Seldon Core Wrapper|seldon-core-wrapper|0.1.3|
|Seldon Core JPMML|seldon-core-jpmml|0.0.1|
Anonymous usage reporting tools that help guide the development of Seldon Core.