Organizations

Non-governmental and civil society organizations working on HCAI

Adapted from Shneiderman, B., Human-Centered AI, Oxford University Press (2022, forthcoming).

There are hundreds of organizations in this category, so this brief listing only samples some of the prominent ones.

NIST Trustworthy & Responsible AI Resource Center (AIRC), the Trustworthy & Responsible Artificial Intelligence Resource Center built by the National Institute of Standards and Technology, serves as a repository for much of the current federal guidance on AI, featuring easy access to previously issued materials to help public and private entities alike create responsible AI systems. https://airc.nist.gov/

Underwriters Laboratories, established in 1894, has been “working for a safer world” by “empowering trust”. They began with testing and certifying electrical devices and then branched out worldwide to evaluate and develop voluntary industry standards. Their vast international network has been successful in producing better products and services, so it seems natural for them to address HCAI. https://www.ul.com/about/mission

Brookings Institution, founded in 1916, is a Washington, DC non-profit public policy organization, which is home to an Artificial Intelligence and Emerging Technology Initiative (AIET). They focus on governance issues by publishing reports and books, bringing together policy makers and researchers at conferences, and “seek to bridge the growing divide between industry, civil society, and policymakers.” https://www.brookings.edu/project/artificial-intelligence-and-emerging-technology-initiative/

Electronic Privacy Information Center (EPIC), founded in 1994, is a Washington, DC-based public interest research center that focuses “public attention on emerging privacy and civil liberties issues and to protect privacy, freedom of expression, and democratic values in the information age.” They run conferences, offer public education, file amicus briefs, pursue litigation, and testify before Congress and governmental organizations. Their recent work has emphasized AI issues such as surveillance and algorithmic transparency. http://epic.org

Algorithmic Justice League, which stems from MIT and Emory University, seeks to lead “a cultural movement towards equitable and accountable AI”. They combine “art and research to illuminate the social implications and harms of AI”. With funding from large foundations and individuals, they have done influential work demonstrating bias, especially in face recognition systems. Their work productively led to algorithmic and training data improvements in leading corporate systems. https://www.ajlunited.org

AI Now Institute at New York University is “an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.” This institute emphasizes “four core domains: Rights & Liberties, Labor & Automation, Bias & Inclusion, Safety & Critical Infrastructure.” It supports research, symposia, and workshops to educate and examine “the social implications of AI”. https://ainowinstitute.org

Data & Society, an independent New York-based non-profit that “studies the social implications of data-centric technologies & automation…. We produce original research on topics including AI and automation, the impact of technology on labor and health, and online disinformation.” https://datasociety.net

Foundation for Responsible Robotics is a Netherlands-based group whose tagline is: “Accountable innovation for the humans behind the robots”. They say their mission is “to shape a future of responsible (AI based) robotics design, development, use, regulation, and implementation. We do this by organizing and hosting events, publishing consultation documents, and through creating public-private collaborations.” https://responsiblerobotics.org

AI4ALL, an Oakland, CA-based nonprofit, works “for a future where diverse backgrounds, perspectives, and voices unlock AI’s potential to benefit humanity”. They sponsor education projects such as summer institutes in the U.S. and Canada for diverse high school and university students, especially women and minorities, to promote AI for social good. http://ai-4-all.org

ForHumanity is a public charity, which examines and analyzes the downside risks associated with AI and automation, such as “their impact on jobs, society, our rights and our freedoms.” They believe that independent audit of AI systems, covering trust, ethics, bias, privacy, and cybersecurity at the corporate and public-policy levels, is a crucial path to building an infrastructure of trust. They believe that “if we make safe and responsible artificial intelligence & automation profitable whilst making dangerous and irresponsible AI & automation costly, then all of humanity wins.” https://www.forhumanity.center/

Future of Life Institute is a Boston-based charity working on AI, biotech, nuclear, and climate issues in the U.S., U.K., and European Union. They seek to “catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.” https://futureoflife.org

Center for AI and Digital Policy is part of the Michael Dukakis Institute for Leadership and Innovation. Their website says that they aim “to ensure that artificial intelligence and digital policies promote a better society, more fair, more just, and more accountable – a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law.” https://caidp.dukakis.org/

Professional organizations and research institutes working on HCAI

Adapted from Shneiderman, B., Human-Centered AI, Oxford University Press (2022, forthcoming).

There are hundreds of organizations in this category, so this brief listing only samples some of the prominent ones. A partial listing is at: https://en.wikipedia.org/wiki/Category:Artificial_intelligence_associations

Institute of Electrical and Electronics Engineers (IEEE) launched a global initiative for ethical considerations in the design of AI and autonomous systems. It is an incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies. https://standards.ieee.org/industry-connections/ec/autonomous-systems.html

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019), organized by the large professional engineering society, brought together more than 200 people over three years to prepare an influential report: Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (IEEE, 2019). https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html https://ethicsinaction.ieee.org/

ACM, a professional society with 100,000 members working in the computing field, has been active in developing principles and ethical frameworks for responsible computing. ACM’s Technical Policy Committee delivered a report with seven principles for algorithmic accountability and transparency (Garfinkel et al., 2017). https://www.acm.org/

Association for the Advancement of Artificial Intelligence (AAAI) is a “nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI aims to promote research in, and responsible use of, artificial intelligence.” They run very successful conferences, symposia, and workshops, often in association with ACM, that bring researchers together to present new work and train newcomers to the field. https://www.aaai.org/

OECD AI Policy Observatory is a project of the Organisation for Economic Co-operation and Development. They work with policy professionals “to consider the opportunities and challenges” in AI and to provide “a centre for the collection and sharing of evidence on AI, leveraging the OECD’s reputation for measurement methodologies and evidence-based analysis.” https://oecd.ai/about

Robotic Industries Association (RIA), founded in 1974, is a North American trade group that “drives innovation, growth, and safety in manufacturing and service industries through education, promotion, and advancement of robotics, related automation technologies, and companies delivering integrated solutions.” https://www.robotics.org

Machine Intelligence Research Institute (MIRI) is a research nonprofit studying the mathematical underpinnings of intelligent behavior. Their mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed. https://intelligence.org

OpenAI is a San Francisco-based research organization that “will attempt to directly build safe and beneficial Artificial General Intelligence (AGI)… that benefits all of humanity.” Their research team is supported by corporate investors, foundations, and private donations. https://openai.com

The Partnership on AI, established in 2016 by six of the largest technology companies, has more than 100 industry, academic, and other partners who “shape best practices, research, and public dialogue about AI’s benefits for people and society.” The founding companies funded the Partnership, which “conducts research, organizes discussions, shares insights, provides thought leadership, consults with relevant third parties, responds to questions from the public and media, and creates educational material.” https://www.partnershiponai.org

Montreal AI Ethics Institute is an international, non-profit research institute dedicated to defining humanity’s place in a world increasingly characterized and driven by algorithms. Their website says “We do this by creating tangible and applied technical and policy research in the ethical, safe, and inclusive development of AI. We’re an international non-profit organization equipping citizens concerned about artificial intelligence to take action.” https://montrealethics.ai/

ELLIS unit Alicante is the only research non-profit in Spain devoted to scientific research in human-centered AI. ELLIS Alicante addresses three important research areas: 1) AI to understand us, by modeling human behavior using AI techniques both at the individual and aggregate levels, 2) AI that interacts with us, via the development of intelligent, interactive systems, and 3) AI that we trust, tackling the ethical challenges brought by AI. https://ellisalicante.org