Responsible-AI-Widgets: A collection of user interfaces for better understanding AI systems, including three widgets that demonstrate how to interpret models and assess their errors and fairness issues.

Error Analysis: A toolkit that identifies cohorts with higher error rates and diagnoses the root causes behind them, to better inform mitigation strategies.
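The core idea, comparing error rates across user-defined cohorts, can be sketched in a few lines of plain Python. This is an illustrative sketch only, not the toolkit's API; the cohort labels and data are made up:

```python
from collections import defaultdict

def cohort_error_rates(y_true, y_pred, cohorts):
    """Group predictions by cohort label and compute each cohort's error rate."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for t, p, c in zip(y_true, y_pred, cohorts):
        counts[c] += 1
        if t != p:
            errors[c] += 1
    return {c: errors[c] / counts[c] for c in counts}

y_true  = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred  = [1, 0, 0, 1, 1, 1, 0, 1]
cohorts = ["A", "A", "A", "B", "B", "B", "B", "B"]
rates = cohort_error_rates(y_true, y_pred, cohorts)
print(rates)  # cohort B errs more often than cohort A: {'A': 0.333..., 'B': 0.4}
```

A cohort whose error rate is well above the others is a candidate for root-cause diagnosis and targeted mitigation.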

InterpretML: A package for training interpretable machine learning models and explaining blackbox systems.
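An interpretable ("glassbox") model is one whose parameters are themselves the explanation. As a standalone sketch of that idea in plain Python (deliberately not InterpretML's API), a one-feature least-squares fit exposes its slope and intercept directly:

```python
def fit_line(xs, ys):
    """Closed-form least squares for y ≈ slope * x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]  # exactly y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # the fitted model's entire "explanation": 2.0 1.0
```

InterpretML's glassbox models (such as Explainable Boosting Machines) generalize this principle to richer model families while keeping each feature's contribution inspectable.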

Fairlearn: Empowers developers of AI systems to assess their systems' fairness and mitigate any negative impacts.
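One common fairness assessment compares positive-prediction (selection) rates across sensitive groups; the gap is known as the demographic parity difference. Fairlearn provides this as a library function; the plain-Python version below is only an illustrative sketch with made-up data:

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction (selection) rate between any two groups."""
    by_group = {}
    for p, g in zip(y_pred, groups):
        by_group.setdefault(g, []).append(p)
    rates = [sum(ps) / len(ps) for ps in by_group.values()]
    return max(rates) - min(rates)

y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
dpd = demographic_parity_difference(y_pred, groups)
print(dpd)  # 0.75 selection rate for F vs 0.25 for M → gap of 0.5
```

A gap near 0 suggests the classifier selects members of each group at similar rates; mitigation algorithms then try to shrink the gap while preserving accuracy.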

AI Fairness 360: An extensible open-source library containing techniques developed by the research community to detect and mitigate bias in machine learning models.

Checklist and Agreement: A document that guides the development of accountable, de-risked, respectful, secure, honest, and usable artificial intelligence (AI) systems with a diverse team aligned on shared ethics.

FairSight: A fair decision-making system that assists decision makers in achieving fairness throughout the machine learning workflow.

Captum: An open-source, extensible library for model interpretability built on PyTorch.
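Attribution methods of the kind Captum implements assign each input feature a share of the model's output. For a purely linear model, gradient-based attributions such as Integrated Gradients reduce exactly to weight × (input − baseline). The plain-Python sketch below illustrates that special case; it is not Captum's PyTorch API, and the weights and inputs are made up:

```python
def linear_attributions(weights, x, baseline):
    """For f(x) = sum(w_i * x_i), each feature's attribution is w_i * (x_i - b_i).
    Attributions sum exactly to f(x) - f(baseline) (the 'completeness' property)."""
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

weights  = [2.0, -1.0, 0.5]
x        = [1.0,  2.0, 4.0]
baseline = [0.0,  0.0, 0.0]
attrs = linear_attributions(weights, x, baseline)
print(attrs)  # [2.0, -2.0, 2.0]

f_x = sum(w * xi for w, xi in zip(weights, x))
print(abs(sum(attrs) - f_x) < 1e-9)  # completeness holds: True
```

For nonlinear PyTorch models, Captum approximates the same quantity by integrating gradients along the path from baseline to input.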

Incident Database: Intelligent systems are prone to unforeseen, often dangerous failures when deployed in the real world. This repository catalogs such real-world failures and the problems that deployed systems can experience.