Tools

RAI, a Python library designed to help AI developers with various aspects of responsible AI development. RAI integrates easily into AI development projects and measures metrics at each phase of development, from data quality assessment to model selection based on performance, fairness, and robustness criteria. https://github.com/cisco-open/ResponsibleAI

Counterfactual Logit Pairing (CLP), a technique that improves a model's robustness to perturbations of sensitive attributes and can positively influence a model's stability, fairness, and safety. https://www.tensorflow.org/responsible_ai/model_remediation/counterfactual/guide/counterfactual_overview

Learning Interpretability Tool (LIT), an open-source platform for visualizing and understanding ML models, used at Google to debug models, review model releases, identify fairness issues, and clean up datasets. https://pair-code.github.io/lit/
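
A minimal sketch of embedding LIT in a notebook follows; `my_model` and `my_dataset` are hypothetical placeholders for objects that implement LIT's `Model` and `Dataset` APIs:

```python
# Minimal sketch: embed the interactive LIT UI in a notebook.
# `my_model` and `my_dataset` are hypothetical placeholders for objects
# subclassing lit_nlp.api.model.Model and lit_nlp.api.dataset.Dataset.
from lit_nlp import notebook

widget = notebook.LitWidget(
    models={"classifier": my_model},
    datasets={"eval": my_dataset},
    height=600,
)
widget.render()  # renders the LIT UI in the notebook output cell
```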

Data Cards, structured summaries of essential facts about ML datasets that stakeholders need across a project's lifecycle for responsible AI development. https://sites.research.google/datacardsplaybook/

Responsible-AI-Widgets, a collection of user interfaces that enable better understanding of AI systems, including three widgets that demonstrate how to interpret models and assess their errors and fairness issues. https://github.com/microsoft/responsible-ai-widgets
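
As an illustration, the fairness assessment widget can be launched from the `raiwidgets` package roughly as follows (a minimal sketch; the trained model and data splits are assumed):

```python
# Minimal sketch: launch the fairness assessment widget from raiwidgets.
# model, X_test, y_test, and A_test (the sensitive feature column) are
# assumed to exist already; model is any estimator exposing predict().
from raiwidgets import FairnessDashboard

y_pred = model.predict(X_test)
FairnessDashboard(
    sensitive_features=A_test,
    y_true=y_test,
    y_pred=y_pred,
)
```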

Error Analysis, a toolkit that identifies cohorts with higher error rates and diagnoses the root causes behind them in order to better inform mitigation strategies. https://github.com/microsoft/responsible-ai-widgets
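
A minimal sketch of launching the dashboard on a trained scikit-learn-style model (the model, test split, and feature names are assumed):

```python
# Minimal sketch: launch the Error Analysis widget on a trained model.
# model, X_test, y_test, and feature_names are assumed to exist already.
from raiwidgets import ErrorAnalysisDashboard

ErrorAnalysisDashboard(
    model=model,             # any model exposing predict()
    dataset=X_test,          # evaluation features
    true_y=y_test,           # ground-truth labels
    features=feature_names,  # column names for the tree/heatmap views
)
```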

InterpretML, a package for training interpretable machine learning models and explaining black-box systems. https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-aml
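
A minimal sketch of the glassbox side of the package: train an Explainable Boosting Machine and view its global explanation (the training data is assumed):

```python
# Minimal sketch: train an interpretable glassbox model with InterpretML
# and inspect what it learned. X_train and y_train are assumed to exist.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: per-feature contribution curves and importances.
show(ebm.explain_global())
```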

Fairlearn, a toolkit that empowers developers of AI systems to assess their systems' fairness and mitigate observed unfairness. https://github.com/fairlearn/fairlearn
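
A minimal sketch of the assessment workflow, disaggregating accuracy by sensitive group with `MetricFrame` (the model and data splits are assumed):

```python
# Minimal sketch: disaggregate a metric by sensitive group with Fairlearn.
# model, X_test, y_test, and A_test (sensitive feature) are assumed.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_test,
    y_pred=model.predict(X_test),
    sensitive_features=A_test,
)
print(mf.overall)       # accuracy over the whole test set
print(mf.by_group)      # accuracy broken down by sensitive group
print(mf.difference())  # largest between-group gap
```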

AI Fairness 360, an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models. https://github.com/Trusted-AI/AIF360
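
A minimal sketch measuring dataset bias and applying Reweighing, one of the library's pre-processing mitigations (`df` is an assumed pandas DataFrame with a binary `label` column and a binary protected attribute `sex`):

```python
# Minimal sketch: detect and mitigate dataset bias with AI Fairness 360.
# df is an assumed pandas DataFrame with a binary 'label' column and a
# binary protected attribute 'sex' (1 = privileged, 0 = unprivileged).
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"]
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print(metric.disparate_impact())  # 1.0 indicates parity between groups

# Reweighing assigns instance weights that remove the measured disparity.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
```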

Checklist and Agreement, a document that guides the development of accountable, de-risked, respectful, secure, honest, and usable artificial intelligence (AI) systems with a diverse team aligned on shared ethics. https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=636620

FairSight, a fair decision-making system that assists decision makers in achieving fairness throughout the machine learning workflow. https://github.com/ayong8/FairSight

Captum, an open-source, extensible library for model interpretability built on PyTorch. https://captum.ai/
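
A minimal sketch using Integrated Gradients, one of Captum's attribution algorithms (the trained PyTorch model and input batch are assumed):

```python
# Minimal sketch: attribute a prediction to input features with Captum.
# model is an assumed trained torch.nn.Module; inputs is a float tensor
# batch shaped to match the model's expected input.
import torch
from captum.attr import IntegratedGradients

model.eval()
ig = IntegratedGradients(model)

# Attribute the class-0 output score to each input feature.
attributions, delta = ig.attribute(
    inputs, target=0, return_convergence_delta=True
)
print(attributions.shape)  # same shape as inputs
print(delta)               # approximation error of the path integral
```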

Incident Database, a repository of failures that intelligent systems have experienced in the real world, where deployed systems remain prone to unforeseen and often dangerous behavior. https://incidentdatabase.ai/

Responsible AI tools for TensorFlow, a suite of tools and resources in the TensorFlow ecosystem for building responsible AI, including Fairness Indicators, the What-If Tool, the Learning Interpretability Tool (LIT), explainability tooling, and TensorFlow Privacy, among others. https://www.tensorflow.org/responsible_ai
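
As one example from the suite, TensorFlow Privacy ships differentially private optimizers that drop into ordinary Keras training (a minimal sketch; the model and training data are assumed):

```python
# Minimal sketch: train a Keras model with a differentially private
# optimizer from TensorFlow Privacy. model, x_train, y_train are assumed.
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import (
    DPKerasSGDOptimizer,
)

optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,      # clip per-example gradients to this L2 norm
    noise_multiplier=1.1,  # Gaussian noise scale relative to the clip
    num_microbatches=32,   # must evenly divide the batch size
    learning_rate=0.1,
)

# DP-SGD needs per-example losses, so the loss reduction is disabled.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE
)
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=5)
```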

FairTorch, a PyTorch library that aims to mitigate the unfairness of machine learning models by implementing parity losses as constraint functions added to the training objective. https://github.com/wbawakate/fairtorch
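
A minimal sketch adding the demographic parity penalty to a standard PyTorch training step (argument names follow the project's README and should be treated as assumptions, as should the model and data loader):

```python
# Minimal sketch: add FairTorch's demographic parity penalty to a normal
# PyTorch training loop. model and loader are assumed; the call signature
# of the loss follows the project README and is an assumption.
import torch
from fairtorch import DemographicParityLoss

criterion = torch.nn.BCEWithLogitsLoss()
dp_loss = DemographicParityLoss(sensitive_classes=[0, 1], alpha=100)
optimizer = torch.optim.Adam(model.parameters())

for x, y, sensitive in loader:  # sensitive holds each row's group label
    optimizer.zero_grad()
    out = model(x).squeeze(-1)
    # Task loss plus the parity constraint term over the batch.
    loss = criterion(out, y.float()) + dp_loss(x, out, sensitive)
    loss.backward()
    optimizer.step()
```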

TrustMLVis Browser, a visual survey of techniques for enhancing trust in machine learning (ML) models with visualization. https://trustmlvis.lnu.se/