Seldon Technologies Named a Representative Vendor in 2023 Gartner® Market Guide for AI TRiSM

Seldon Technologies has been recognized in the 2023 Gartner Market Guide for Artificial Intelligence (AI) Trust, Risk and Security Management (TRiSM). Seldon is listed as a Representative Explainability/Model Monitoring Vendor in the AI TRiSM market for its product Seldon Deploy Advanced. 

What is AI TRiSM? 

According to the Gartner report, “the AI TRiSM market comprises multiple software segments that ensure AI model governance, trustworthiness, fairness, reliability, security and data protection. AI TRiSM tools include solutions for: 

  • Model explainability and model monitoring
  • Privacy
  • ModelOps
  • AI application security

Together, use of solutions from these four categories helps organizations implement AI-specific trust, risk, and security management measures”. Gartner states that “data and analytics leaders must use the capabilities described in their guide to improve model reliability, trustworthiness, fairness, privacy and security”.

Why is AI TRiSM So Important? 

AI delivery is key to growth across many industries, but organizations need a structure in which they can safely deploy this technology at scale. When approaching ML projects and building the supporting infrastructure, it is essential to consider the Trust, Risk and Security Management implications of this technology.

In our opinion, AI TRiSM is set to be a defining technology trend in the coming years. Gartner predicts that “by 2026, organizations that operationalize AI transparency, trust, and security will see their AI models achieve a 50% result improvement in terms of adoption, business goals, and user acceptance.”*

AI is extremely powerful and has enormous potential; however, it must be handled with caution. Organizations are already investing in this technology, but achieving ROI often hinges on getting deployment pipelines through the various risk, compliance and security checks and balances.

Upholding standards of reliability, trustworthiness, fairness, privacy and security is essential, and we believe the framework of TRiSM as established by Gartner is a great starting point. 

How Gartner defines Explainability/Model Monitoring

According to Gartner, explainable AI is “a set of capabilities that produce details or reasons that clarify a model’s functioning for a specific audience”.

Explainability:

  • Describes a model
  • Highlights a model’s strengths and weaknesses
  • Predicts a model’s likely behaviour
  • Identifies any potential biases in a model
  • Clarifies a model’s functioning to a specific audience to enable accuracy, fairness, accountability, stability and transparency in algorithmic decision making


Major monitoring functions include:

  • Model fairness distributions and data drift checks during inferencing and production
  • Data leakage detection
  • Data poisoning detection
  • Compliance adherence to model data consumption
  • Measured data distribution shifts along with prior probability and covariate shifts

How should organizations use this framework? 

Gartner makes several recommendations for data and analytics leaders, advising them to “work with their colleague stakeholders responsible for AI trust, risk, and security management (AI TRiSM) to:

  • Mitigate AI risks by assigning organizational roles and responsibilities to manage AI TRiSM
  • Document each model’s intention, the extent to which bias must be controlled, and optimal business outcomes. Implement preset model intentions by using tools described in this Market Guide
  • Prevent compliance issues and support successful outcomes by investigating and stack ranking available privacy technologies
  • Initiate an AI application security program by first examining the whole AI application attack surface, including third-party applications with embedded AI. Start with adversarial attack resistance.”

How Seldon Empowers AI TRiSM Implementation

Seldon’s advanced monitoring features and explainable AI (XAI) capabilities can help organizations that want to adopt AI TRiSM embed its principles into their model operations and continuously validate model integrity and reliability.

Monitor

As per the Gartner report, “Model monitoring primarily supports model performance”. In our experience, overall machine learning system performance is often difficult to observe, which increases business risk. Seldon enables faster root-cause analysis with advanced monitoring features that make it easier to debug ML systems.

With Seldon, you can put adequate risk assessment and mitigation systems in place and ensure you are working with high-quality datasets, reducing the risk of discriminatory outcomes. That increased transparency and accountability across ML projects translates into a higher level of robustness, security and accuracy.
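To make this concrete, here is a minimal sketch of the kind of data drift check this monitoring builds on, written with Alibi Detect, the open-source drift and outlier detection library maintained by Seldon. The reference window, feature dimensions and significance threshold are placeholders chosen for illustration, not settings prescribed by Seldon Deploy Advanced.

```python
import numpy as np
from alibi_detect.cd import KSDrift

# Reference window: feature vectors the model was trained or validated on.
# Shapes and values here are synthetic placeholders for the sketch.
x_ref = np.random.normal(loc=0.0, scale=1.0, size=(1000, 10)).astype(np.float32)

# A Kolmogorov-Smirnov drift detector compares live data against the reference.
detector = KSDrift(x_ref, p_val=0.05)

# A batch of live inference inputs; the shifted mean simulates covariate shift.
x_live = np.random.normal(loc=0.5, scale=1.0, size=(200, 10)).astype(np.float32)

# predict() returns an overall drift flag plus per-feature p-values.
result = detector.predict(x_live)
print("Drift detected:", bool(result["data"]["is_drift"]))
print("Feature-level p-values:", result["data"]["p_val"])
```

In production, a detector like this would typically run alongside the deployed model, flagging distribution shifts in live traffic rather than waiting for labelled outcomes.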

Explain

As per the Gartner report, “Explainability primarily supports model trustworthiness and predictability”. We believe explainable AI helps you meet AI risk, privacy and industry standards as well as compliance regulations. It also helps you identify potential model bias, because you can explain a model’s decision-making criteria.

The increased trust you can build with XAI enables faster adoption of machine learning and helps you surface actionable insights, so your team can ensure models make decisions in an ethical and fair way.
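As an illustration of what such an explanation can look like, the sketch below uses Alibi, the open-source explainability library maintained by Seldon, to produce an anchor explanation for a single prediction. The scikit-learn iris classifier and the 0.95 precision threshold are stand-ins chosen for the example, not anything specific to Seldon Deploy Advanced.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Train a simple classifier as a stand-in for a production model.
data = load_iris()
X, y = data.data, data.target
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# AnchorTabular needs a prediction function and the feature names.
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(X)

# Explain a single prediction: the anchor is a set of feature conditions
# under which the model's decision holds with high precision.
explanation = explainer.explain(X[0], threshold=0.95)
print("Prediction:", data.target_names[clf.predict(X[0].reshape(1, -1))[0]])
print("Anchor:", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
```

The resulting anchor is a human-readable set of feature conditions under which the prediction holds, which is the kind of audience-specific clarification Gartner’s definition of explainability describes.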

Manage

Our enterprise platform also includes management capabilities that help organizations run their AI models more effectively and safely.

Our platform includes intuitive logging and alerting to make sure results are traceable. If mistakes occur during deployment, you can easily revert models to their previous states, while GitOps-based audit trails facilitate reproducibility.

Team collaboration can often become a bottleneck. Seldon enables seamless collaboration across teams, especially for high-stakes deployments, with advanced user management that supports granular policies and regulatory compliance of models.

Another important capability for managing models in highly regulated industries is being able to document the whole process. Our platform provides detailed technical documentation that authorities can use to assess an ML model’s compliance with regulations.

Interested in applying AI TRiSM to your models?

It’s important to apply AI TRiSM before models are put into production. If you don’t, you expose the process to unnecessary risk.

Gartner urges IT leaders to become familiar with the ways AI can be compromised, and to use AI TRiSM solutions to properly protect AI.*

If you are interested in faster AI adoption, achieving your AI business goals and gaining user acceptance, you need to be able to appropriately manage AI trust, risk and security.

Our team would love to speak to you about your unique ML challenges and how we can help you implement AI TRiSM. Get in touch and request a live product demo today!

*Gartner, “What It Takes to Make AI Safe and Effective”, October 19, 2022.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
