The Products That Are Right for Your Team

A lightweight inference server to deploy models
Price: Open Source

A software framework to deploy models into production
Price: Free for Non-Production

Seldon Core with added support and warranties for peace of mind
Price: On Request

Add-Ons

LLM Module
The next step in your AI evolution through effortless deployment and scalable innovation for LLMs. Available with Seldon Core+.

Alibi
Two powerful Python libraries for post-deployment monitoring to ensure better reliability in your applications.

Seldon IQ
For teams wanting deep-dive sessions and additional training on Seldon Core; included with Core+.

Seldon Features

Support
Slack Community
Community Calls
Warranted Binaries
Base Support (response time SLA): Critical 24hrs
Enhanced Support
9-5 GMT or ET Support
Custom Support Hours: Add On
IQ Sessions: 3
Support Portal
Annual Health Check
Customer Success Manager
Support for Standard Configurations
Support for Complex Configurations: Add On
Features
Lightweight Inference Serving
Pre-packaged Runtimes
BYO Custom Runtimes
Model Serving
LLM Serving and Management: Add On - LLM Module
Observability - Drift Detection: Add On - Alibi Detect Module
Observability - Outlier Detection: Add On - Alibi Detect Module
Interpretability - Prediction Explanations: Add On - Alibi Explain Module
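To make "Pre-packaged Runtimes" concrete: deploying a model with one of Seldon Core's pre-packaged model servers is a short Kubernetes manifest. This is a minimal sketch assuming Seldon Core v1 is installed on a Kubernetes cluster; the deployment name and storage URI are illustrative, not real resources.

```yaml
# Sketch: deploy a scikit-learn model using Seldon Core's
# pre-packaged SKLEARN_SERVER runtime (names are illustrative).
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-classifier
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        implementation: SKLEARN_SERVER          # pre-packaged runtime
        modelUri: gs://my-bucket/models/iris    # hypothetical model location
```

Applied with `kubectl apply -f`, this exposes the model behind REST/gRPC prediction endpoints; a "BYO Custom Runtime" deployment would instead reference your own serving container in place of the pre-packaged implementation.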