Including: V2 Python Wrapper Graduation, Advanced Monitoring Runtimes & OpenShift Operator
This release of Seldon Core, v1.13.0, introduces major improvements across several of its advanced monitoring and second-generation components, including:
- The new v2 Python Wrapper “MLServer”
- Explainer Runtime upgraded to Alibi Explain 0.6.4
- Detector Runtime upgraded to Alibi Detect 0.8.2
- Certified OpenShift Operator Update and Process
For a more detailed overview of the fixes and changes, you can view the release changelog.
V2 Python Wrapper – MLServer Upgrade
This month we were excited to announce the 1.0 release of our next-generation Python language wrapper, “MLServer”, which introduces significant improvements and new features identified over several years of running the v1 Python wrapper in production. Some of the key features to highlight as part of the GA release include the following:
- Multi-model serving, letting users run multiple models within the same process.
- Ability to run inference in parallel for vertical scaling across multiple models through a pool of inference workers.
- Support for adaptive batching, to group inference requests together on the fly.
- Support for the standard V2 Inference Protocol on both the gRPC and REST flavors, which has been standardized and adopted by various model serving frameworks.
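To make the protocol concrete, below is a minimal sketch of calling a model served by MLServer over the REST flavour of the V2 Inference Protocol. The host, port, model name and tensor shape are placeholders for illustration; adjust them to match your own deployment.

```python
import requests

# A V2 Inference Protocol request: a list of named, typed input tensors.
inference_request = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}

# MLServer exposes REST on port 8080 by default; "my-model" is a placeholder name.
response = requests.post(
    "http://localhost:8080/v2/models/my-model/infer",
    json=inference_request,
)
response.raise_for_status()

# The response follows the same V2 schema: a list of named output tensors.
for output in response.json()["outputs"]:
    print(output["name"], output["shape"], output["data"])
```

The same request can be made over the protocol's gRPC flavour via its `ModelInfer` RPC, which is what makes MLServer interoperable with other V2-compliant serving frameworks.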
To learn more about the full set of features and the different machine learning frameworks supported by MLServer, check out the launch video below, as well as the MLServer docs. We would love to hear any thoughts, feedback and suggestions from the community!
Certified OpenShift Operator
As part of our collaboration with Red Hat, the Seldon team has performed the Certified OpenShift Operator release for Seldon Core v1.12.0. This includes the set of scanned and tested images that are optimized for running on the latest OpenShift v4.9 clusters. The release process we have introduced certifies the previous stable release, so we will be looking to publish v1.13.0 as part of the next release. You can try out the OpenShift operator yourself through the Red Hat Ecosystem Catalog.
Upgraded Alibi Detect & Explain Runtimes
As part of this release of Seldon Core we have performed major upgrades on the Alibi Detect & Explain servers, introducing a broad range of new features. This includes the upgrade to Alibi Detect 0.8.2 and Alibi Explain 0.6.4.
Alibi Explain is an advanced machine learning explainability framework that enables practitioners to interpret the inference process of their machine learning models. Seldon has built an Explainer Server that provides a pre-packaged runtime, allowing Alibi users to deploy their explainer artifacts as fully fledged microservices that can perform real-time explanations on deployed models. This introduces flexible architectural patterns that enable interpretation of already-deployed models at large scale. The updates in this release introduce minor fixes and improvements to the AnchorImage and IntegratedGradients explainability algorithms.
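As an illustration of the kind of artifact the Explainer Server runs, the sketch below builds a simple Alibi explainer locally. It uses AnchorTabular on a toy scikit-learn classifier for brevity rather than the AnchorImage or IntegratedGradients algorithms mentioned above, but the pattern of fitting an explainer around a prediction function is the same; the fitted artifact would then typically be saved and pointed to from the explainer section of a SeldonDeployment.

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a simple classifier whose predictions we want to interpret.
dataset = load_iris()
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(dataset.data, dataset.target)

# Anchor explainers only need a prediction function and the feature names.
predict_fn = lambda x: clf.predict_proba(x)
explainer = AnchorTabular(predict_fn, feature_names=dataset.feature_names)
explainer.fit(dataset.data, disc_perc=(25, 50, 75))

# Explain a single instance: which feature conditions "anchor" this prediction?
explanation = explainer.explain(dataset.data[0], threshold=0.95)
print("Anchor:   ", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
print("Coverage: ", explanation.coverage)
```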
Alibi Detect is a state-of-the-art framework for outlier, adversarial and drift detection, and is used to power advanced monitoring for production machine learning systems at scale. Seldon Core has an integrated Alibi Detect server that provides an optimized runtime for the broad range of detector algorithms available, allowing for flexible and rich configuration of artifacts created with the Alibi Detect framework. The upgrade in this version of Seldon Core brings in the major improvements from the 0.7.x and 0.8.x releases of Alibi Detect, allowing users to make use of the latest features.
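By way of illustration, the sketch below fits a Kolmogorov-Smirnov drift detector from Alibi Detect on synthetic reference data and scores a shifted batch. In a Seldon Core deployment the fitted detector would typically be saved and served by the Alibi Detect server, receiving live inference data rather than the placeholder arrays used here.

```python
import numpy as np
from alibi_detect.cd import KSDrift

# Synthetic reference data standing in for the training distribution.
rng = np.random.default_rng(0)
x_ref = rng.normal(loc=0.0, scale=1.0, size=(500, 10)).astype(np.float32)

# Initialise a Kolmogorov-Smirnov drift detector against the reference set.
drift_detector = KSDrift(x_ref, p_val=0.05)

# Score a new batch whose mean has shifted, mimicking drift in production.
x_new = rng.normal(loc=0.5, scale=1.0, size=(200, 10)).astype(np.float32)
preds = drift_detector.predict(x_new, return_p_val=True, return_distance=True)

print("Drift detected:       ", bool(preds["data"]["is_drift"]))
print("Feature-wise p-values:", preds["data"]["p_val"])
```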
You can try hands-on examples for each of these new versions in their respective updated documentation.
Get Involved
The 1.13.0 release notes provide the full list of changes. We encourage users to get involved with the community and help provide further enhancements to Seldon Core. Join our Slack channel and come along to our community calls.