Tools

Responsible-AI-Widgets, Responsible-AI-Widgets provides a collection of user interfaces that enable a better understanding of AI systems, including three widgets that demonstrate how to interpret models and assess their errors and fairness issues, https://github.com/microsoft/responsible-ai-widgets

Error Analysis, Error Analysis is a toolkit that enables you to identify cohorts with higher error rates and diagnose their root causes, in order to better inform your mitigation strategies, https://github.com/microsoft/responsible-ai-widgets

InterpretML, InterpretML is a package for training interpretable machine learning models and explaining black-box systems, https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-aml

Fairlearn, Fairlearn empowers developers of AI systems to assess their systems' fairness and mitigate any negative impacts, https://github.com/fairlearn/fairlearn

AI Fairness 360, The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models, https://github.com/Trusted-AI/AIF360

Checklist and Agreement, This document can be used to guide the development of accountable, de-risked, respectful, secure, honest, and usable artificial intelligence (AI) systems with a diverse team aligned on shared ethics, https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=636620

FairSight, FairSight is a fair decision-making system that assists decision makers in ensuring fairness throughout the machine learning workflow, https://github.com/ayong8/FairSight

Captum, Captum is an open-source, extensible library for model interpretability built on PyTorch, https://captum.ai/

Incident Database, Intelligent systems are prone to unforeseen and often dangerous failures when deployed in the real world; this database catalogs incidents of such failures experienced by deployed systems, https://incidentdatabase.ai/