Last week, the Seldon team attended this year’s Open Data Science Conference (ODSC). From June 8th to 10th, we hosted a virtual booth where we introduced attendees to Seldon Deploy and were on hand to answer questions from ODSC’s esteemed list of guests. These included fellow companies shaping both the present and the future of AI, as well as professional data scientists from across the USA, Europe, Asia and beyond.
Everyone who visited Seldon’s booth was also in with a chance of winning an Oculus Rift 2 VR headset, and we are pleased to reveal that the winner of our competition is:
Esteban Jenkins, Data and Analytics Engineer at Organon!
Introduction to Seldon Deploy: Deployment, Management and Monitoring of ML Models in Production
Another highlight of the conference was a talk from our Solutions Engineering Lead, Tom Farrand, who discussed how to deploy, manage and monitor machine learning models in production using Seldon Deploy.
Tom leads our solutions engineers, matching client needs with Seldon’s deployment capabilities to help our customers realise the potential of ML. In his demo, Tom showed off a range of Seldon Deploy’s capabilities, including how teams can quickly deploy a model, access analytics on a model’s resource usage and performance, and leverage Seldon Deploy’s model explanation, outlier detection, and drift detection features.
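For a flavour of what sits under the hood, below is a minimal sketch of the kind of Python model wrapper served by Seldon Core, the open-source engine that Seldon Deploy builds on. The class name, model artifact path, and serving command here are illustrative assumptions, not taken from Tom’s demo:

```python
# MyModel.py -- a minimal Seldon Core Python model wrapper (illustrative sketch).
# Seldon Core turns any class exposing predict() into a REST/gRPC microservice.
import joblib


class MyModel:
    def __init__(self):
        # Load a pre-trained model artifact; "model.joblib" is an assumed path.
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # X arrives as a numpy array; return per-class probabilities.
        return self.model.predict_proba(X)
```

Packaged into a container, a wrapper like this can be served with `seldon-core-microservice MyModel --service-type MODEL` and deployed to Kubernetes, with Seldon Deploy providing the management and analytics layer Tom demonstrated.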
Production Machine Learning Monitoring: Principles, Patterns and Techniques
On the final day of the conference, Seldon’s engineering director Alejandro Saucedo gave a talk to the ODSC audience on the principles, patterns and techniques that should underlie production-level machine learning (ML) monitoring.
While much emphasis is placed on the development and testing of ML models, in practice the lifecycle of a model only truly begins once it is put into a production environment. Alejandro covered the best practices, principles, patterns, and techniques for monitoring ML models in production so as to ensure they perform as intended.
Alejandro also discussed how teams can best leverage microservice monitoring techniques for their deployed ML models, along with tools and methods for detecting concept drift, flagging outliers, and explaining model behaviour.
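Many of these techniques are implemented in Seldon’s open-source Alibi Detect library. As one illustration, here is a minimal drift-detection sketch using its Kolmogorov-Smirnov detector; the reference and production data are synthetic placeholders rather than anything shown in the talk:

```python
# Minimal drift-detection sketch using Seldon's open-source Alibi Detect library.
import numpy as np
from alibi_detect.cd import KSDrift

# Reference data the model was trained on (synthetic placeholder).
x_ref = np.random.normal(loc=0.0, scale=1.0, size=(1000, 10))

# Initialise a feature-wise Kolmogorov-Smirnov drift detector.
cd = KSDrift(x_ref, p_val=0.05)

# An incoming production batch whose distribution has shifted.
x_prod = np.random.normal(loc=0.5, scale=1.0, size=(200, 10))

preds = cd.predict(x_prod)
print(preds["data"]["is_drift"])  # 1 if drift is detected, else 0
```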
After talking through the theory, Alejandro gave ODSC attendees a hands-on example: training an image classification model from scratch, deploying it as a microservice in Kubernetes, and then introducing advanced monitoring capabilities as architectural patterns. He demonstrated AI explainers, outlier detectors, concept drift detectors, and adversarial detectors, and explained the high-level architectural patterns that abstract these advanced techniques into scalable infrastructure.
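To give a flavour of the detector step, here is a minimal outlier-detection sketch along the same lines, using Alibi Detect’s isolation-forest detector; the data is a synthetic stand-in for the image features used in the talk:

```python
# Minimal outlier-detection sketch using Alibi Detect's isolation forest.
import numpy as np
from alibi_detect.od import IForest

# Fit the detector on data resembling what the model saw in training.
x_train = np.random.normal(size=(1000, 10))
od = IForest(n_estimators=100)
od.fit(x_train)
od.infer_threshold(x_train, threshold_perc=95)  # flag the top 5% as outliers

# Score an incoming instance far from the training distribution.
x = np.full((1, 10), 10.0)
preds = od.predict(x)
print(preds["data"]["is_outlier"])  # array([1]) for an outlier
```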
Ultimately, Alejandro showed that by abstracting sophisticated monitoring tools into infrastructure components, teams can develop standardised, scalable interfaces that enable monitoring across hundreds or thousands of diverse ML models. With this in place, organisations can achieve reliable production-scale ML deployment and use the power of ML to solve their hard business problems.
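One way to picture such a standardised interface (purely a hypothetical sketch, not Seldon’s actual implementation) is a single scoring contract that every detector, whether for drift, outliers, or adversarial inputs, conforms to:

```python
# Hypothetical sketch of a standardised detector interface: any detector
# (drift, outlier, adversarial) exposing score() can sit behind the same
# monitoring infrastructure, regardless of the model it watches.
from typing import Protocol
import numpy as np


class Detector(Protocol):
    def score(self, x: np.ndarray) -> dict:
        """Return a standard payload, e.g. {"alert": bool, "score": float}."""
        ...


def monitor(detectors: list[Detector], x: np.ndarray) -> list[dict]:
    # One generic hook, invoked on prediction traffic, fanning the same
    # payload out to every registered detector.
    return [d.score(x) for d in detectors]
```

Because the monitoring layer depends only on the contract, new detectors or models can be added without changing the surrounding infrastructure.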
Want to find out more about Seldon and our suite of products? Get a demo today.