To top off our 12 Days of MLOps Christmas, what better way to while away those long festive days than with our top webinars? We’re immensely proud of the content from the Seldon Webinar series this year and want to make sure you haven’t missed the best bits!
Here’s a rundown of the best sessions we ran in 2022, and you can watch them all on-demand on the Seldon website:
MetaOps: Metadata Operations For End-To-End Data & Machine Learning Platforms
Alejandro Saucedo, VP Engineering
About this webinar
In this session, which premiered at KubeCon NA, we dive into why MetaOps matters for robust and reliable ML deployment.
Organisations are developing their ML capabilities, but the pursuit of time-to-value can come at the cost of complexity and bottlenecks. Collecting, tracking and managing metadata is increasingly important to satisfy overarching compliance and architectural requirements on lineage, auditability, accountability and reproducibility.
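To make this concrete, here is a minimal sketch of the kind of metadata capture the session advocates, using MLflow as one illustrative open-source option. The webinar itself is tool-agnostic, and the run name, parameter values and tags below are hypothetical:

```python
import mlflow

# Record the metadata needed to answer lineage and reproducibility
# questions later: what was trained, on which data, from which commit.
with mlflow.start_run(run_name="churn-model-v2"):   # hypothetical run name
    mlflow.log_param("n_estimators", 200)           # hypothetical hyperparameter
    mlflow.log_metric("val_auc", 0.91)              # hypothetical metric
    mlflow.set_tag("dataset_version", "2022-11-01") # hypothetical tag values
    mlflow.set_tag("git_commit", "abc123")
```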
What you’ll learn
- Challenges present in the metadata layer of large-scale systems
- Tooling, best practices and solutions to adopt for these challenges
- The rise of metadata management systems
- How to ensure long-term robustness of your platform
How Can Financial Services Trust AI?
Ed Shee, Head of Developer Relations and Richard Jarvis, FSI Lead
About this webinar
Governance for machine learning is essential and must form the backbone of MLOps systems in order to satisfy financial, legal and ethical obligations. However, getting to this point is a key challenge for FS organisations. From a business stakeholder’s perspective, governance is likely to slow down model production and cost the business money. And from a data scientist’s perspective, governance is a lot of bureaucracy that negatively impacts their productivity.
Implementing governance of not just technology but also people and processes enables ML deployment to scale across an organisation and generate true value. In this session, FSI Lead Richard Jarvis will dive into the challenges he has seen from our customers and how to overcome them. A number of key tools and techniques have proven successful in the companies we have worked with, helping to reduce internal governance timelines by up to six months. Ed Shee, Head of Developer Relations, will join Richard and share what he has learnt from working closely with the wider developer community. They’ll also discuss the research from Seldon’s Engineering Director into the world of MLSecOps.
We’ll also look to the future and explore what financial services will face in terms of regulation, both what will need to be monitored and reported as well as how those needs can be met. The ML project of the future needs to have complete trust and transparency from internal and external stakeholders to succeed.
What you’ll learn
- What is the state of ML adoption and capabilities in FSI?
- What regulations are incoming that the industry needs to prepare for?
- What tools will become essential to mitigate governance risk?
- How do you create agile scale in legacy environments?
- What does this mean for your organisation’s future?
A Hands-On Intro to Drift Detection
Ashley Scillitoe, Data Science Research Engineer and Ed Shee, Head of Developer Relations
About this webinar
Although powerful, modern machine learning models can be sensitive. Seemingly subtle changes in data distribution can destroy the performance of otherwise state-of-the-art models, which can be especially problematic when ML models are deployed in production. In this webinar, we will give a hands-on overview of drift detection: the discipline focused on detecting such changes. We will start by building an understanding of the ways in which drift can occur, and why it pays to detect it. We’ll then explore the anatomy of a drift detector, and learn how they can be used to detect drift in a principled manner.
We’ll work through a real-world example using Alibi Detect, an open-source Python library offering powerful algorithms for adversarial, outlier and drift detection. You’ll learn how to set up drift detectors, and deduce what type of drift is occurring. Since data can take on many forms, such as image, text or tabular data, you’ll explore how to use existing ML models to preprocess your data into a form suitable for drift detectors. Then, to gain further insights into the causes of drift, we will employ advanced detectors which are able to perform fine-grained attribution to instances and features. To assess whether model performance has been affected by drift, we’ll experiment with using model uncertainty-based detectors. Finally, we’ll use a novel context-aware drift detector. This takes in context (or conditioning) variables, allowing you to test for drift depending on context that is permitted to change. We’ll discuss how this functionality can be crucial in many real-life drift detection scenarios.
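If you want a taste before watching, here is a minimal sketch of a drift detector built with Alibi Detect, using synthetic tabular data in place of the webinar’s real-world example:

```python
import numpy as np
from alibi_detect.cd import KSDrift

# Reference data the detector treats as the "no drift" distribution
x_ref = np.random.normal(0, 1, size=(1000, 5)).astype(np.float32)

# Feature-wise Kolmogorov-Smirnov tests at a 5% significance level
cd = KSDrift(x_ref, p_val=0.05)

# A new batch whose mean has shifted, simulating drift in production
x = np.random.normal(0.5, 1, size=(500, 5)).astype(np.float32)
preds = cd.predict(x)
print(preds["data"]["is_drift"])  # 1 if drift is detected
print(preds["data"]["p_val"])     # per-feature p-values
```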
What you’ll learn
- The importance of drift detection
- How data drift can occur
- How to use Alibi Detect to set up drift detectors and deduce what type of drift is occurring
- How these approaches can be applied and create value in real-world situations
Secure Machine Learning: The major security flaws in the ML lifecycle (and how to avoid them)
Alejandro Saucedo, Engineering Director
About this webinar
The operation and maintenance of large-scale production machine learning systems has uncovered new challenges which require fundamentally different approaches from those of traditional software. The field of security in data & machine learning infrastructure has seen growing attention due to the critical risks being identified as it expands into more demanding real-world use cases.
In this talk we will introduce the motivations and the importance of security in data & machine learning infrastructure through a set of practical examples showcasing “Flawed Machine Learning Security.” These “Flawed ML Security” examples are analogous to the annual “OWASP Top 10” report that highlights the top vulnerabilities in the web space, and will draw out common high-risk touch points.
Throughout this session we will work through a practical example that showcases how we can leverage the plethora of cloud-native tooling to mitigate these critical security vulnerabilities. We will cover concepts such as role-based access control for ML system artifacts and resources, encryption and access restrictions of data in transit and at rest, best practices for supply chain vulnerability mitigation, tools for vulnerability scans, and templates that practitioners can introduce to ensure best practices.
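As a small taste of the supply-chain side, here is a minimal sketch (not taken from the talk itself) of one such mitigation: verifying a model artifact’s checksum against a pinned digest before deserialising it. The path handling and digest below are placeholders:

```python
import hashlib
from pathlib import Path

# Digest pinned at build time, e.g. in a signed manifest (placeholder value)
EXPECTED_SHA256 = "replace-with-pinned-digest"

def sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file on disk."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def load_if_trusted(path: str) -> bytes:
    """Refuse to load a model artifact whose checksum doesn't match."""
    digest = sha256(path)
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"artifact checksum mismatch: {digest}")
    # Only deserialise the artifact once its integrity has been verified
    return Path(path).read_bytes()
```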
What you’ll learn
- The importance of security in data and ML infrastructure
- Common high-risk touch points and vulnerabilities in ML systems, analogous to the OWASP Top 10 for the web
- How to leverage tools to mitigate these critical security vulnerabilities
- Templates to ensure best practices
Open Source Explainability – Understanding Model Decisions
Alex Athorne, Research Engineer
About this webinar
Explainable AI, or XAI, is a rapidly expanding field of research that aims to supply methods for understanding model predictions. Alex will start by providing a general introduction to the field of explainability, introduce the open-source Alibi library, and focus on how it helps you to understand trained models. He will then explore the collection of algorithms provided by Alibi and the types of insight they each provide, looking at a broad range of datasets and models and discussing the pros and cons of each. The aim is to give the ML practitioner a clear idea of how Alibi can be used to justify, explore and enhance their use of ML, especially for models in deployment.
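For a flavour of the insights Alibi provides, here is a minimal sketch using its AnchorTabular explainer on a scikit-learn classifier, a simple stand-in for the broader range of models the webinar covers:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Anchor explanations: human-readable rules that "anchor" a prediction
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)

explanation = explainer.explain(data.data[0])
print(explanation.anchor)     # e.g. a rule like 'petal width (cm) <= 0.80'
print(explanation.precision)  # how often the rule yields the same prediction
```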
What you’ll learn
- The importance of explainability
- How to use Alibi to understand trained models
- When to use Alibi to enhance your models
- The effectiveness of Seldon Deploy across serving, monitoring and explainability
Accelerating Machine Learning in Kubernetes at Massive Scale
Alejandro Saucedo, Engineering Director
About this webinar
As the MLOps ecosystem continues to grow at breakneck speed, identifying the right tools for high-performance production machine learning can become overwhelming.
In this session we provide a hands-on guide on how you can productionize optimized machine learning models using production-ready open source tools & frameworks that scale.
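One of the optimisation paths the session highlights is the ONNX open standard and runtime. As a minimal sketch (assuming skl2onnx and onnxruntime are installed; the model and shapes are illustrative), here is how a scikit-learn model might be exported to ONNX and served with ONNX Runtime:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

# Export the trained model to the ONNX open standard
onx = convert_sklearn(model, initial_types=[("input", FloatTensorType([None, 4]))])
with open("model.onnx", "wb") as f:
    f.write(onx.SerializeToString())

# Run optimised inference with ONNX Runtime
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
preds = sess.run(None, {input_name: X[:5].astype(np.float32)})
print(preds[0])  # predicted labels for the first five rows
```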
What you’ll learn
- How to leverage these tools for your own models
- The broad range of pre-trained models available
- How each of the tools in the stack interoperate throughout the production machine learning lifecycle with a practical example
- How to leverage ONNX Open Standard and Runtime for optimization
- Scaling and monitoring machine learning at scale with low complexity
Seldon x Noitso – A Tale of MLOps & Explainability: Do You Dare to Deploy a Credit Scoring Model to Prod?
Thor Larsen, Data Scientist, Noitso
About this webinar
In this presentation, Thor Larsen, Data Scientist at Noitso, dives into his experience of implementing MLOps and Explainability and how his team uses Seldon to help provide their customers with quick and accurate credit ratings, scorecards and risk profiles. Hear first-hand the challenges and solutions to effective model deployment and moving time-to-value of models from days to hours.
Check out our recent case study with Noitso on how they used Seldon to power their machine learning operations.
There is plenty of data out there. In the Nordic financial sector, we are blessed with more data than most, and at Noitso we harness this data. However, creating and deploying machine learning models in a production setting is hard. There is also high risk: when dealing out credit, one mistake in prod will impact your bottom line. Organisations need MLOps. This encompasses many important things, among them reproducibility, rolling deployments and online monitoring of data drift and outliers. On top of all this, you also need compliant explanations of what your model is predicting. In the real world, this is true for all predictions based on machine learning.
What you’ll learn
- How to reduce time to deployment from days to hours
- How to manage reproducibility and rolling deployments
- Monitoring drift and outliers
Secure Machine Learning at Scale with MLSecOps
Alejandro Saucedo, Engineering Director
About this webinar
The operation and maintenance of large-scale production machine learning systems has uncovered new challenges which have required fundamentally different approaches from those of traditional software. The area of security in MLOps has seen a rise in attention as machine learning infrastructure expands to further critical use cases across industry.
What you’ll learn
- The key security challenges that arise in production machine learning systems
- Best practices and frameworks that can be adopted to help mitigate security risks
- How to secure a machine learning model
- Which tools to use to secure production machine learning systems
- Best practices for a critical area of machine learning operations that is of paramount importance in production
Protecting Your Machine Learning Against Drift
Oliver Cobb, Machine Learning Researcher, Seldon
About this webinar
Deployed machine learning models can fail spectacularly in response to seemingly benign changes to the underlying process being modelled. Concerningly, when labels are not available, as is often the case in deployment settings, this failure can occur silently and go unnoticed.
This talk will consist of a practical introduction to drift detection, the discipline focused on detecting such changes. We will start by building an understanding of how drift can occur, why it pays to detect it and how it can be detected in a principled manner. We will then discuss the practicalities and challenges around detecting it as quickly as possible in machine learning deployment settings where high-dimensional, unlabelled data is arriving continuously. We will finish by demonstrating how the theory can be put into practice using the `alibi-detect` Python library.
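As a preview, here is a minimal sketch of an MMD-based detector from `alibi-detect`, a kernel two-sample test well suited to the high-dimensional, unlabelled setting the talk discusses (assuming PyTorch is installed for the backend; the data is synthetic):

```python
import numpy as np
from alibi_detect.cd import MMDDrift

# Reference sample drawn from the training distribution
x_ref = np.random.normal(0, 1, size=(500, 32)).astype(np.float32)
cd = MMDDrift(x_ref, backend="pytorch", p_val=0.05)

# Unlabelled production batch with a subtle mean shift
x = np.random.normal(0.3, 1, size=(200, 32)).astype(np.float32)
preds = cd.predict(x)
print(preds["data"]["is_drift"], preds["data"]["p_val"])
```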
What you’ll learn
- The common pitfalls of ML models
- What is drift detection
- How to use the `alibi-detect` Python library
A CI/CD Framework for Production Machine Learning at Massive Scale
Alejandro Saucedo, ML Engineering Director
About this webinar
Managing production machine learning systems at scale has uncovered new challenges that have required fundamentally different approaches from those of traditional software engineering and data science. In this talk, Alejandro Saucedo, Engineering Director at Seldon, provides key insights on MLOps, which often encompasses the concepts around monitoring, deployment, orchestration and continuous delivery for machine learning.
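To make the continuous delivery idea concrete, here is a minimal, hypothetical sketch of the kind of validation gate a CI/CD pipeline might run before promoting a model. The threshold and loading mechanism are illustrative, not from the talk:

```python
import sys
import joblib
from sklearn.metrics import accuracy_score

MIN_ACCURACY = 0.90  # illustrative promotion threshold; tune per use case

def ci_gate(model_path: str, X_val, y_val) -> None:
    """Fail the CI job (non-zero exit) if the candidate model underperforms."""
    model = joblib.load(model_path)
    acc = accuracy_score(y_val, model.predict(X_val))
    print(f"validation accuracy: {acc:.3f}")
    if acc < MIN_ACCURACY:
        sys.exit(1)  # a non-zero exit blocks the deployment stage of the pipeline
```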
What you’ll learn
- What is a CI/CD Pipeline
- How to implement a CI/CD Pipeline
- How to scale your deployment for continuous delivery
Detecting and Handling Drift
Ed Shee, Head of Developer Relations and Arnaud van Looveren, Head of Data Science Research
About this webinar
The machine learning lifecycle extends beyond the deployment stage. Monitoring deployed models is crucial for the continued provision of high-quality machine learning enabled services. Key areas include model performance and data monitoring, and detecting outliers and data drift using statistical techniques. Join our latest webinar with Arnaud van Looveren and Ed Shee as they explore how to detect model drift, what methodologies exist for detecting drift, common mistakes made by organizations, and how to automate MLOps processes at scale to handle the issue.
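And for a taste of the outlier detection side, here is a minimal sketch using the IForest detector from `alibi-detect` on synthetic data (the thresholding percentile is illustrative):

```python
import numpy as np
from alibi_detect.od import IForest

# Fit an isolation-forest outlier detector on "normal" training data
X_train = np.random.normal(0, 1, size=(1000, 8)).astype(np.float32)
od = IForest(n_estimators=100)
od.fit(X_train)
od.infer_threshold(X_train, threshold_perc=99)  # illustrative percentile

# Score a batch containing five inliers and five obvious outliers
X = np.concatenate([X_train[:5],
                    np.random.normal(6, 1, size=(5, 8)).astype(np.float32)])
preds = od.predict(X)
print(preds["data"]["is_outlier"])  # 0 = inlier, 1 = outlier
```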