A curated list of awesome academic research, books, code of ethics, courses, data sets, frameworks, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustworthy, and Human-Centered AI.
AI governance is a system of rules, processes, frameworks, and tools within an organization to ensure the ethical and responsible development of AI.
Human-Centered Artificial Intelligence (HCAI) is an approach to AI development that prioritizes human users' needs, experiences, and well-being.
When we refer to a “system,” we are speaking both broadly about a fully functional structure and its discrete structural elements. To be considered Open Source, the requirements are the same, whether applied to a system, a model, weights and parameters, or other structural elements.
An Open Source AI is an AI system made available under terms and in a way that grant the freedoms to:
- Use the system for any purpose and without having to ask for permission.
- Study how the system works and inspect its components.
- Modify the system for any purpose, including to change its output.
- Share the system for others to use with or without modifications, for any purpose.
Responsible AI (RAI) refers to the development, deployment, and use of artificial intelligence (AI) systems in ways that are ethical, transparent, accountable, and aligned with human values.
Responsible AI frameworks often encompass guidelines, principles, and practices that prioritize fairness, safety, and respect for individual rights.
Trustworthy AI (TAI) refers to artificial intelligence systems designed and deployed to be transparent, robust and respectful of data privacy.
AI is a transformative technology poised to reshape industries, yet it requires careful governance to balance the benefits of automation and insight with protections against unintended social, economic, and security impacts. You can read more about the current wave here.
- Academic Research
- Books
- Code of Ethics
- Courses
- Data Sets
- Frameworks
- Institutes
- Newsletters
- Principles
- Podcasts
- Reports
- Tools
- Regulations
- Standards
- Citing this repository
- Agarwal, C., Krishna, S., Saxena, E., Pawelczyk, M., Johnson, N., Puri, I., ... & Lakkaraju, H. (2022). OpenXAI: Towards a transparent evaluation of model explanations. Advances in Neural Information Processing Systems, 35, 15784-15799. Article
- Liesenfeld, A., and Dingemanse, M. (2024). Rethinking Open Source Generative AI: Open-Washing and the EU AI Act. In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Rio de Janeiro, Brazil: ACM. Article Benchmark
- Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence (Vol. 3, p. 00). US Department of Commerce, National Institute of Standards and Technology. Article
NIST
- D'Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., ... & Sculley, D. (2022). Underspecification presents challenges for credibility in modern machine learning. Journal of Machine Learning Research, 23(226), 1-61. Article
Google
- Ackerman, S., Dube, P., Farchi, E., Raz, O., & Zalmanovici, M. (2021, June). Machine learning model drift detection via weak data slices. In 2021 IEEE/ACM Third International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest) (pp. 1-8). IEEE. Article
IBM
- Ackerman, S., Raz, O., & Zalmanovici, M. (2020, February). FreaAI: Automated extraction of data slices to test machine learning models. In International Workshop on Engineering Dependable and Secure Machine Learning Systems (pp. 67-83). Cham: Springer International Publishing. Article
IBM
- Dhurandhar, A., Chen, P. Y., Luss, R., Tu, C. C., Ting, P., Shanmugam, K., & Das, P. (2018). Explanations based on the missing: Towards contrastive explanations with pertinent negatives. Advances in neural information processing systems, 31. Article
University of Michigan
IBM Research
- Dhurandhar, A., Shanmugam, K., Luss, R., & Olsen, P. A. (2018). Improving simple models with confidence profiles. Advances in Neural Information Processing Systems, 31. Article
IBM Research
- Gurumoorthy, K. S., Dhurandhar, A., Cecchi, G., & Aggarwal, C. (2019, November). Efficient data representation by selecting prototypes with importance weights. In 2019 IEEE International Conference on Data Mining (ICDM) (pp. 260-269). IEEE. Article
Amazon Development Center
IBM Research
- Hind, M., Wei, D., Campbell, M., Codella, N. C., Dhurandhar, A., Mojsilović, A., ... & Varshney, K. R. (2019, January). TED: Teaching AI to explain its decisions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 123-129). Article
IBM Research
- Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in neural information processing systems, 30. Article, Github
University of Washington
- Luss, R., Chen, P. Y., Dhurandhar, A., Sattigeri, P., Zhang, Y., Shanmugam, K., & Tu, C. C. (2021, August). Leveraging latent features for local explanations. In Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining (pp. 1139-1149). Article
IBM Research
University of Michigan
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144). Article, Github
University of Washington
- Wei, D., Dash, S., Gao, T., & Gunluk, O. (2019, May). Generalized linear rule models. In International conference on machine learning (pp. 6687-6696). PMLR. Article
IBM Research
- Contrastive Explanations Method with Monotonic Attribute Functions (Luss et al., 2019)
- Boolean Decision Rules via Column Generation (Light Edition) (Dash et al., 2018)
IBM Research
- Towards Robust Interpretability with Self-Explaining Neural Networks (Alvarez-Melis et al., 2018)
MIT
- Caton, S., & Haas, C. (2024). Fairness in machine learning: A survey. ACM Computing Surveys, 56(7), 1-38. Article
- Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2), 153-163. Article
- Coston, A., Mishler, A., Kennedy, E. H., & Chouldechova, A. (2020, January). Counterfactual risk assessments, evaluation, and fairness. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 582-593). Article
- Jesus, S., Saleiro, P., Jorge, B. M., Ribeiro, R. P., Gama, J., Bizarro, P., & Ghani, R. (2024). Aequitas Flow: Streamlining Fair ML Experimentation. arXiv preprint arXiv:2405.05809. Article
- Saleiro, P., Kuester, B., Hinkson, L., London, J., Stevens, A., Anisfeld, A., ... & Ghani, R. (2018). Aequitas: A bias and fairness audit toolkit. arXiv preprint arXiv:1811.05577. Article
- Vasudevan, S., & Kenthapadi, K. (2020, October). Lift: A scalable framework for measuring fairness in ml applications. In Proceedings of the 29th ACM international conference on information & knowledge management (pp. 2773-2780). Article
LinkedIn
- Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86-92. Article
Google
- Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., ... & Gebru, T. (2019, January). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency (pp. 220-229). Article
Google
- Pushkarna, M., Zaldivar, A., & Kjartansson, O. (2022, June). Data cards: Purposeful and transparent dataset documentation for responsible ai. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1776-1826). Article
Google
- Rostamzadeh, N., Mincu, D., Roy, S., Smart, A., Wilcox, L., Pushkarna, M., ... & Heller, K. (2022, June). Healthsheet: development of a transparency artifact for health datasets. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1943-1961). Article
Google
- Saint-Jacques, G., Sepehri, A., Li, N., & Perisic, I. (2020). Fairness through Experimentation: Inequality in A/B testing as an approach to responsible design. arXiv preprint arXiv:2002.05819. Article
LinkedIn
- Lacoste, A., Luccioni, A., Schmidt, V., & Dandres, T. (2019). Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700. Article
- P. Li, J. Yang, M. A. Islam, S. Ren, (2023) Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models, arXiv:2304.03271 Article
- Parcollet, T., & Ravanelli, M. (2021). The energy and carbon footprint of training end-to-end speech recognizers. Article
- Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L.M., Rothchild, D., So, D., Texier, M. and Dean, J. (2021). Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350. Article
- Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., ... & Dennison, D. (2015). Hidden technical debt in machine learning systems. Advances in neural information processing systems, 28. Article
Google
- Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., ... & Young, M. (2014, December). Machine learning: The high interest credit card of technical debt. In SE4ML: software engineering for machine learning (NIPS 2014 Workshop) (Vol. 111, p. 112). Article
Google
- Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243. Article
- Sustainable AI: AI for sustainability and the sustainability of AI (van Wynsberghe, A. 2021). AI and Ethics, 1-6
- Green Algorithms: Quantifying the carbon emissions of computation (Lannelongue, L. et al. 2020)
- C.-J. Wu, R. Raghavendra, U. Gupta, B. Acun, N. Ardalani, K. Maeng, G. Chang, F. Aga, J. Huang, C. Bai, M. Gschwind, A. Gupta, M. Ott, A. Melnikov, S. Candido, D. Brooks, G. Chauhan, B. Lee, H.-H. Lee, K. Hazelwood, Sustainable AI: Environmental implications, challenges and opportunities in Proceedings of the 5th Conference on Machine Learning and Systems (MLSys) (2022) vol. 4, pp. 795–813. Article
- Google Research on Responsible AI: https://research.google/pubs/?collection=responsible-ai
Google
- Pipeline-Aware Fairness: http://fairpipe.dssg.io
Computational reproducibility (when the results in a paper can be replicated using the exact code and dataset provided by the authors) is becoming a significant problem not only for academics but also for practitioners who want to implement AI in their organizations and aim to reuse ideas from academia. Read more about this problem here.
- Barrett, M., Gerke, T., & D'Agostino McGowan, L. (2024). Causal Inference in R Book
Causal Inference
R
- Biecek, P., & Burzykowski, T. (2021). Explanatory model analysis: explore, explain, and examine predictive models. Chapman and Hall/CRC. Book
Explainability
Interpretability
Transparency
R
- Biecek, P. (2024). Adversarial Model Analysis. Book
Safety
Red Teaming
- Cunningham, Scott. (2021) Causal inference: The mixtape. Yale University Press. Book
Causal Inference
- Fourrier, C., et al. (2024). LLM Evaluation Guidebook. GitHub repository. Web
LLM Evaluation
- Freiesleben, T. & Molnar, C. (2024). Supervised Machine Learning for Science: How to stop worrying and love your black box. Book
- Matloff, N., et al. (2024). Data Science Looks at Discrimination Book
Fairness
R
- Molnar, C. (2020). Interpretable Machine Learning. Lulu.com. Book
Explainability
Interpretability
Transparency
R
- Huntington-Klein, Nick. (2021) The effect: An introduction to research design and causality. Chapman and Hall/CRC. Book
Causal Inference
- Trust in Machine Learning (Varshney, K., 2022)
Safety
Privacy
Drift
Fairness
Interpretability
Explainability
- Interpretable AI (Thampi, A., 2022)
Explainability
Fairness
Interpretability
- AI Fairness (Mahoney, T., Varshney, K.R., Hind, M., 2020)
Report
Fairness
- Practical Fairness (Nielsen, A., 2021)
Fairness
- Hands-On Explainable AI (XAI) with Python (Rothman, D., 2020)
Explainability
- AI and the Law (Kilroy, K., 2021)
Report
Trust
Law
- Responsible Machine Learning (Hall, P., Gill, N., Cox, B., 2020)
Report
Law
Compliance
Safety
Privacy
- Privacy-Preserving Machine Learning
- Human-In-The-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI
- Interpretable Machine Learning With Python: Learn to Build Interpretable High-Performance Models With Hands-On Real-World Examples
- Responsible AI (Hall, P., Chowdhury, R., 2023)
Governance
Safety
Drift
- ACS Code of Professional Conduct by Australian ICT (Information and Communication Technology)
- AI Standards Hub
- Association for Computer Machinery's Code of Ethics and Professional Conduct
- IEEE Global Initiative for Ethical Considerations in Artificial Intelligence (AI) and Autonomous Systems (AS)
- ISO/IEC's Standards for Artificial Intelligence
- Explainable Artificial Intelligence
Harvard University
- CS594 - Causal Inference and Learning
University of Illinois at Chicago
- Introduction to AI Ethics
Kaggle
- Practical Data Ethics
Fast.ai
- CS7880 - Rigorous Approaches to Data Privacy
Northeastern University
- CS860 - Algorithms for Private Data Analysis
University of Waterloo
- CIS 4230/5230 - Ethical Algorithm Design
University of Pennsylvania
- Introduction to ML Safety
Center for AI Safety
- AI Risk Database
MITRE
- AI Risk Repository
MIT
- ARC AGI
- Common Corpus
- An ImageNet replacement for self-supervised pretraining without humans
- Huggingface Data Sets
- The Stack
- A Framework for Ethical Decision Making
Markkula Center for Applied Ethics
- Data Ethics Canvas
Open Data Institute
- Deon
Python
Drivendata
- Ethics & Algorithms Toolkit
- RAI Toolkit
US Department of Defense
- Ada Lovelace Institute
United Kingdom
- AI Safety Institutes (or equivalent):
- Canada AISI
Canada
- EU AI Office
Europe
- Japan AISI
Japan
- Singapore AISI
Singapore
- UK AISI
United Kingdom
- US AISI
United States of America
- Centre pour la Sécurité de l'IA, CeSIA
France
- European Centre for Algorithmic Transparency
- Center for Human-Compatible AI
UC Berkeley
United States of America
- Center for Responsible AI
New York University
United States of America
- Montreal AI Ethics Institute
Canada
- Munich Center for Technology in Society (IEAI)
TUM School of Social Sciences and Technology
Germany
- National AI Centre's Responsible AI Network
Australia
- Open Data Institute
United Kingdom
- Stanford University Human-Centered Artificial Intelligence (HAI)
United States of America
- The Institute for Ethical AI & Machine Learning
- UNESCO Chair in AI Ethics & Governance
IE University
Spain
- University of Oxford Institute for Ethics in AI
University of Oxford
United Kingdom
- AI Policy Perspectives
- AI Policy Weekly
- AI Safety Newsletter
- AI Snake Oil
- Import AI
- Marcus on AI
- Navigating AI Risks
- One Useful Thing
- The AI Ethics Brief
- The AI Evaluation Substack
- The EU AI Act Newsletter
- The Machine Learning Engineer
- Turing Post
- Allianz's Principles for a responsible usage of AI
Allianz
- Asilomar AI principles
- European Commission's Guidelines for Trustworthy AI
- Google's AI Principles
Google
- IEEE's Ethically Aligned Design
IEEE
- Microsoft's AI principles
Microsoft
- OECD's AI principles
OECD
- Telefonica's AI principles
Telefonica
- The Institute for Ethical AI & Machine Learning: The Responsible Machine Learning Principles
Additional:
- FAIR Principles
Findability
Accessibility
Interoperability
Reuse
- Araujo, R. 2024. Understanding the First Wave of AI Safety Institutes: Characteristics, Functions, and Challenges. Institute for AI Policy and Strategy (IAPS) Article
- Buchanan, B. 2020. The AI triad and what it means for national security strategy. Center for Security and Emerging Technology. Article
- Corrigan, J. et al. 2023. The Policy Playbook: Building a Systems-Oriented Approach to Technology and National Security Policy. CSET (Center for Security and Emerging Technology) Article
- Curto, J. 2024. How Can Spain Remain Internationally Competitive in AI under EU Legislation? Article
- CSIS. 2024. The AI Safety Institute International Network: Next Steps and Recommendations. CSIS (Center for Strategic and International Studies) Article
- Gupta, Ritwik, et al. (2024). Data-Centric AI Governance: Addressing the Limitations of Model-Focused Policies. arXiv preprint arXiv:2409.17216. [Article](https://arxiv.org/pdf/2409.17216)
- Hendrycks, D. et al. 2023. An overview of catastrophic AI risks. Center of AI Safety. arXiv preprint arXiv:2306.12001. Article
- Janjeva, A., et al. (2023). Strengthening Resilience to AI Risk. A guide for UK policymakers. CETaS (Centre for Emerging Technology and Security) Article
- Piattini, M. and Fernández C.M. 2024. Marco Confiable. Revista SIC 162 Article
- Sastry, G., et al. 2024. Computing Power and the Governance of Artificial Intelligence. arXiv preprint arXiv:2402.08797. Article
- AI Incident Database
- AI Vulnerability Database (AVID)
- AIAAIC
- AI Badness: An open catalog of generative AI badness
- George Washington University Law School's AI Litigation Database
- Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database
- OECD AI Incidents Monitor
- Verica Open Incident Database (VOID)
- State of AI - from 2018 up to now -
- The AI Index Report - from 2017 up to now -
Stanford Institute for Human-Centered Artificial Intelligence
- Four Principles of Explainable Artificial Intelligence
NIST
Explainability
- Psychological Foundations of Explainability and Interpretability in Artificial Intelligence
NIST
Explainability
- Inferring Concept Drift Without Labeled Data, 2021
Drift
- Interpretability, Fast Forward Labs, 2020
Interpretability
- Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270)
NIST
Bias
- Auditing machine learning algorithms
Auditing
- FrontierMath
- Geekbench AI
- Jailbreakbench
Python
- LiveBench: A Challenging, Contamination-Free LLM Benchmark
Contamination free
- ML Commons Safety Benchmark for general purpose AI chat model
- MLPerf Training Benchmark
Training
- MMMU
Apple
Python
- StrongREJECT jailbreak benchmark
Python
- τ-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains
Python
- Yet Another Applied LLM Benchmark
Python
- VLMEvalKit
Python
- CausalAI
Python
Salesforce
- CausalNex
Python
- CausalImpact
R
- Causalinference
Python
- Causal Inference 360
Python
- CausalPy
Python
- CIMTx: Causal Inference for Multiple Treatments with a Binary Outcome
R
- dagitty
R
- DoWhy (see the usage sketch after this list)
Python
Microsoft
- mediation: Causal Mediation Analysis
R
- MRPC
R
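As a quick orientation to the causal tooling above, here is a minimal, hedged sketch of a typical DoWhy workflow (identify, estimate, refute). The synthetic data, column names, and chosen estimator are illustrative assumptions, not prescriptions from the library's documentation.

```python
# Hedged sketch (illustrative only): backdoor adjustment with DoWhy.
# The DataFrame, column names ("t", "y", "x1", "x2") and the estimator are assumptions.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
x = rng.normal(size=(1_000, 2))
t = (x[:, 0] + rng.normal(size=1_000) > 0).astype(int)          # treatment driven by x1
y = 2.0 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=1_000)  # true effect = 2.0

df = pd.DataFrame({"x1": x[:, 0], "x2": x[:, 1], "t": t, "y": y})
model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["x1", "x2"])

estimand = model.identify_effect()
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print("ATE estimate:", estimate.value)                           # should be close to 2.0

# Sanity check: a placebo treatment should yield an effect near zero.
refutation = model.refute_estimate(estimand, estimate, method_name="placebo_treatment_refuter")
print(refutation)
```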
- Alibi Detect (see the usage sketch after this list)
Python
- Deepchecks
Python
- drifter
R
- Evidently
Python
- nannyML
Python
- phoenix
Python
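To illustrate the drift-monitoring tools above, a minimal sketch using Alibi Detect's KSDrift follows; the reference and production arrays, and the 0.05 significance threshold, are assumptions for demonstration.

```python
# Hedged sketch: batch drift detection with Alibi Detect's KSDrift.
# The reference/production arrays and the 0.05 threshold are illustrative assumptions.
import numpy as np
from alibi_detect.cd import KSDrift

rng = np.random.default_rng(0)
x_ref = rng.normal(0.0, 1.0, size=(1_000, 5))   # window the model was trained/validated on
x_new = rng.normal(0.5, 1.0, size=(500, 5))     # shifted production window

detector = KSDrift(x_ref, p_val=0.05)           # feature-wise Kolmogorov-Smirnov tests
preds = detector.predict(x_new, return_p_val=True)
print("Drift detected:", bool(preds["data"]["is_drift"]))
print("Per-feature p-values:", preds["data"]["p_val"])
```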
- Aequitas' Bias & Fairness Audit Toolkit
Python
- AI360 Toolkit
Python
R
IBM
- dsld: Data Science Looks at Discrimination
R
- EDFfair: Explicitly Deweighted Features
R
- EquiPy
Python
- Fairlearn (see the usage sketch after this list)
Python
Microsoft
- Fairmodels
R
University of California
- fairness
R
- FairRankTune
Python
- FairPAN - Fair Predictive Adversarial Network
R
- OxonFair
Python
Oxford Internet Institute
- Themis ML
Python
- What-If Tool
Python
Google
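As an example of how the fairness toolkits above are typically applied, here is a hedged sketch of a disaggregated audit with Fairlearn; the labels, predictions, and the "A"/"B" sensitive attribute are synthetic placeholders.

```python
# Hedged sketch: a disaggregated fairness audit with Fairlearn's MetricFrame.
# y_true, y_pred and the "A"/"B" sensitive attribute are synthetic placeholders.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
group = rng.choice(["A", "B"], size=200)        # hypothetical sensitive feature

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.by_group)                              # metrics disaggregated by group
print("Demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```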
- Alibi Explain
Python
- Automated interpretability
Python
OpenAI
- AI360 Toolkit
Python
R
IBM
- aorsf: Accelerated Oblique Random Survival Forests
R
- breakDown: Model Agnostic Explainers for Individual Predictions
R
- captum
Python
PyTorch
- ceterisParibus: Ceteris Paribus Profiles
R
- DALEX: moDel Agnostic Language for Exploration and eXplanation
Python
R
- DALEXtra: extension for DALEX
Python
R
- Dianna
Python
- Diverse Counterfactual Explanations (DiCE)
Python
Microsoft
- dtreeviz
Python
- ecco article
Python
- eli5
Python
- explabox
Python
National Police Lab AI
- eXplainability Toolbox
Python
- ExplainaBoard
Python
Carnegie Mellon University
- ExplainerHub in github
Python
- fastshap
R
- fasttreeshap
Python
LinkedIn
- FAT Forensics
Python
- flashlight
R
- Human Learn
Python
- hstats
R
- innvestigate
Python
Neural Networks
- interpretML
Python
- interactions: Comprehensive, User-Friendly Toolkit for Probing Interactions
R
- kernelshap: Kernel SHAP
R
- Learning Interpretability Tool
Python
Google
- lime: Local Interpretable Model-Agnostic Explanations
R
- Network Dissection
Python
Neural Networks
MIT
- OmniXAI
Python
Salesforce
- Shap (see the usage sketch at the end of this list)
Python
- Shapash
Python
- shapper
R
- shapviz
R
- Skater
Python
Oracle
- survex
R
- teller
Python
- TCAV (Testing with Concept Activation Vectors)
Python
- TruLens
Python
Truera
- TruLens-Eval
Python
Truera
- pre: Prediction Rule Ensembles
R
- Vetiver
R
Python
Posit
- vip
R
- vivid
R
- XAI - An eXplainability toolbox for machine learning
Python
The Institute for Ethical Machine Learning
- xplique
Python
- XAIoGraphs
Python
Telefonica
- Zennit
Python
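To show what the explainability libraries above typically look like in practice, here is a minimal sketch using SHAP's unified Explainer API; the random-forest model and synthetic data are assumptions for illustration.

```python
# Hedged sketch: post-hoc attributions with SHAP's unified Explainer API.
# The random-forest model and synthetic data are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.Explainer(model, X)            # auto-selects a tree explainer here
shap_values = explainer(X[:100])                # Explanation: per-sample, per-feature attributions
print(shap_values.values.shape)                 # (100, 4)
# shap.plots.beeswarm(shap_values)              # optional global summary plot
```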
- imodels
Python
- imodelsX
Python
- interpretML (see the usage sketch after this list)
Python
Microsoft
R
- PiML Toolbox
Python
- Tensorflow Lattice
Python
Google
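As a brief example of the interpretable-model libraries above, here is a hedged sketch of fitting a glass-box Explainable Boosting Machine with interpretML; the synthetic data are placeholders, and interpret.show() can render the explanation as an interactive dashboard in notebook environments.

```python
# Hedged sketch: a glass-box Explainable Boosting Machine with interpretML.
# The synthetic data are placeholders for a real tabular problem.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

ebm = ExplainableBoostingClassifier().fit(X, y)
print("Training accuracy:", ebm.score(X, y))
global_expl = ebm.explain_global()              # per-feature shape functions and importances
```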
- COMPL-AI
Python
ETH Zurich
Insait
LatticeFlow AI
- AlignEval: Making Evals Easy, Fun, and Semi-Automated
- Azure AI Evaluation
Python
Microsoft
- DeepEval
Python
- evals
Python
OpenAI
- FBI: Finding Blindspots in LLM Evaluations with Interpretable Checklists
Python
- Giskard
Python
- Inspect
AISI
Python
- LightEval
HuggingFace
Python
- LM Evaluation Harness (see the usage sketch after this list)
Python
- Moonshot
AI Verify Foundation
Python
- opik
Comet
Python
- Phoenix
Arize AI
Python
- Prometheus
Python
- Promptfoo
Python
- ragas
Python
- Rouge
Python
- simple evals
Python
OpenAI
- WindowsAgentArena
Python
Microsoft
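As an orientation to the LLM evaluation tools above, a hedged sketch of a programmatic run with LM Evaluation Harness; it assumes the v0.4+ Python API, and the model name, task, and sample limit are placeholders chosen to keep the run small.

```python
# Hedged sketch: programmatic evaluation with LM Evaluation Harness (lm-eval >= 0.4 assumed).
# The model name, task, and sample limit are placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                     # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-70m",  # hypothetical small model
    tasks=["hellaswag"],
    limit=20,                                       # evaluate on a small subsample only
)
print(results["results"]["hellaswag"])
```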
- auditor
R
- automl: Deep Learning with Metaheuristic
R
- AutoKeras
Python
- Auto-Sklearn
Python
- DataPerf
Python
Google
- deepchecks
Python
- EloML
R
- Featuretools
Python
- LOFO Importance
Python
- forester
R
- metrica: Prediction performance metrics
R
- NNI: Neural Network Intelligence
Python
Microsoft
- performance
R
- rliable
Python
Google
- TensorFlow Model Analysis
Python
Google
- TPOT
Python
- Unleash
Python
- Yellowbrick
Python
- WeightWatcher (Examples)
Python
- Copyright Traps for Large Language Models
Python
- Nightshade
University of Chicago
Tool
- Glaze
University of Chicago
Tool
- Fawkes
University of Chicago
Tool
- BackPACK
Python
- DataSynthesizer: Privacy-Preserving Synthetic Datasets
Python
Drexel University
University of Washington
- diffpriv
R
- Diffprivlib (see the usage sketch after this list)
Python
IBM
- Discrete Gaussian for Differential Privacy
Python
IBM
- Opacus
Python
Facebook
- Privacy Meter
Python
National University of Singapore
- PyVacy: Privacy Algorithms for PyTorch
Python
- SEAL
Python
Microsoft
- SmartNoise
Python
OpenDP
- Tensorflow Privacy
Python
Google
- openXAI
Python
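To illustrate the privacy tooling above, a minimal sketch of differentially private training with Diffprivlib; the epsilon budget and clipping bound are illustrative values, not recommendations.

```python
# Hedged sketch: differentially private training with IBM's Diffprivlib.
# The epsilon budget and data_norm clipping bound are illustrative, not recommendations.
import numpy as np
from diffprivlib.models import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = LogisticRegression(epsilon=1.0, data_norm=5.0)  # privacy budget and L2 bound on rows
clf.fit(X, y)
print("Accuracy under differential privacy:", clf.score(X, y))
```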
- Adversarial Robustness Toolbox (ART) (see the usage sketch after this list)
Python
- BackdoorBench
Python
- Foolbox
Python
- Guardrails
Python
- NeMo Guardrails
Python
Nvidia
- Dioptra (https://github.com/usnistgov/dioptra)
Python
NIST
- Garak
Python
Nvidia
- Counterfit
Python
Microsoft
- Modelscan
Python
- NB Defense
Python
- Rebuff Playground
Python
- Turing Data Safe Haven
Python
The Alan Turing Institute
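As an example of the security tooling above, a hedged sketch of a black-box evasion attack with the Adversarial Robustness Toolbox; the model, data, and attack budget are illustrative assumptions, and exact attack parameters may differ across ART versions.

```python
# Hedged sketch: black-box adversarial examples with the Adversarial Robustness Toolbox.
# HopSkipJump only needs model predictions, so a scikit-learn classifier can be attacked;
# the model, data, and attack budget are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)).astype(np.float32)
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
classifier = SklearnClassifier(model=model, clip_values=(float(X.min()), float(X.max())))

attack = HopSkipJump(classifier, max_iter=10, max_eval=100, init_eval=10)
x_adv = attack.generate(x=X[:5])                # perturbed copies of five samples
print("Max perturbation:", float(np.abs(x_adv - X[:5]).max()))
print("Flipped predictions:", (model.predict(x_adv) != model.predict(X[:5])).sum())
```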
For consumers:
- Azure Sustainability Calculator
Microsoft
- Carbon Tracker Website
Python
- CodeCarbon Website (see the usage sketch after this list)
Python
- Computer Progress
- Impact Framework
API
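To show how the sustainability trackers above are typically used, a minimal sketch with CodeCarbon; the tracked workload is a placeholder and the project name is arbitrary.

```python
# Hedged sketch: measuring the footprint of a training job with CodeCarbon.
# The tracked workload is a placeholder sleep; project_name is an arbitrary label.
import time
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    time.sleep(2)                               # stand-in for model training
finally:
    emissions_kg = tracker.stop()               # estimated kg CO2-equivalent
print(f"Estimated emissions: {emissions_kg} kg CO2eq")
```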
- Dr. Why
R
Warsaw University of Technology
- Responsible AI Widgets
Python
Microsoft
- The Data Cards Playbook
Python
Google
- Mercury
Python
BBVA
- Deepchecks
Python
- AudioSeal: Proactive Localized Watermarking
Python
Facebook
- MarkLLM: An Open-Source Toolkit for LLM Watermarking
Python
- SynthID Text
Python
Google
What are regulations?
Regulations are requirements established by governments.
- Data Protection and Privacy Legislation Worldwide
UNCTAD
- Data Protection Laws of the World
- Digital Policy Alert
- GDPR Comparison
- National AI policies & strategies
- SCL Artificial Intelligence Contractual Clauses
- Algorithmic Impact Assessment tool
- Directive on Automated Decision-Making
- Directive on Privacy Practices
- Directive on Security Management
- Directive on Service and Digital
- Policy on Government Security
- Policy on Service and Digital
- Privacy Act
Short Name | Code | Description | Status | Website | Legal text |
---|---|---|---|---|---|
Cyber Resilience Act (CRA) - horizontal cybersecurity requirements for products with digital elements | 2022/0272(COD) | It introduces mandatory cybersecurity requirements for hardware and software products, throughout their whole lifecycle. | Proposal | Website | Source |
Data Act | EU/2023/2854 | It enables a fair distribution of the value of data by establishing clear and fair rules for accessing and using data within the European data economy. | Published | Website | Source |
Data Governance Act | EU/2022/868 | It supports the setup and development of Common European Data Spaces in strategic domains, involving both private and public players, in sectors such as health, environment, energy, agriculture, mobility, finance, manufacturing, public administration and skills. | Published | Website | Source |
Digital Markets Act | EU/2022/1925 | It establishes a set of clearly defined objective criteria to identify “gatekeepers”. Gatekeepers are large digital platforms providing so-called core platform services, such as online search engines, app stores, and messenger services. Gatekeepers have to comply with the do’s (i.e. obligations) and don’ts (i.e. prohibitions) listed in the DMA. | Published | Website | Source |
Digital Services Act | EU/2022/2065 | It regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms. Its main goal is to prevent illegal and harmful activities online and the spread of disinformation. It ensures user safety, protects fundamental rights, and creates a fair and open online platform environment. | Published | Website | Source |
DSM Directive | EU/2019/790 | It is intended to ensure a well-functioning marketplace for copyright. | Published | Website | Source |
Energy Efficiency Directive | EU/2023/1791 | It establishes ‘energy efficiency first’ as a fundamental principle of EU energy policy, giving it legal standing for the first time. In practical terms, this means that energy efficiency must be considered by EU countries in all relevant policy and major investment decisions taken in the energy and non-energy sectors. | Published | Website | Source |
EU AI Act | EU/2024/1689 | It assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk are banned. Second, high-risk applications are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated. | Published | Website | Source |
General Data Protection Regulation (GDPR) | EU/2016/679 | It strengthens individuals' fundamental rights in the digital age and facilitates business by clarifying rules for companies and public bodies in the digital single market. | Published | Website | Source |
- State consumer privacy laws: California (CCPA and its amendment, CPRA), Virginia (VCDPA), and Colorado (ColoPA).
- Specific and limited privacy data laws: HIPAA, FCRA, FERPA, GLBA, ECPA, COPPA, VPPA and FTC.
- EU-U.S. and Swiss-U.S. Privacy Shield Frameworks - The EU-U.S. and Swiss-U.S. Privacy Shield Frameworks were designed by the U.S. Department of Commerce and the European Commission and Swiss Administration to provide companies on both sides of the Atlantic with a mechanism to comply with data protection requirements when transferring personal data from the European Union and Switzerland to the United States in support of transatlantic commerce.
- Executive Order on Maintaining American Leadership in AI - Official mandate by the President of the US.
- Privacy Act of 1974 - Establishes a code of fair information practices that governs the collection, maintenance, use, and dissemination of information about individuals that is maintained in systems of records by federal agencies.
- Privacy Protection Act of 1980 - The Privacy Protection Act of 1980 protects journalists from being required to turn over to law enforcement any work product and documentary materials, including sources, before it is disseminated to the public.
- AI Bill of Rights - The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from AI threats, based on five principles: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback.
What are standards?
Standards are voluntary, consensus solutions. They document an agreement on how a material, product, process, or service should be specified, performed or delivered. They keep people safe and ensure things work. They create confidence and provide security for investment.
Standards can be understood as formal specifications of best practices as well. There is a growing number of standards related to AI. You can search for the latest in the Standards Database from AI Standards Hub.
Standard | Code | Status | URL |
---|---|---|---|
IEEE Guide for an Architectural Framework for Explainable Artificial Intelligence | IEEE 2894-2024 | Published | Source |
IEEE Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems | IEEE 7014-2024 | Published | Source |
Domain | Standard | Status | URL |
---|---|---|---|
Data quality (Calidad del dato) | UNE 0079:2023 | Published | Source |
Data management (Gestión del dato) | UNE 0078:2023 | Published | Source |
Data governance (Gobierno del dato) | UNE 0077:2023 | Published | Source |
Guide for assessing the quality of a data set (Guía de evaluación de la Calidad de un Conjunto de Datos) | UNE 0081:2023 | Published | Source |
Guide for assessing data governance, data management, and data quality management (Guía de evaluación del Gobierno, Gestión y Gestión de la Calidad del Dato) | UNE 0080:2023 | Published | Source |
Domain | Standard | Status | URL |
---|---|---|---|
AI Concepts and Terminology | ISO/IEC 22989:2022 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology | Published | https://www.iso.org/standard/74296.html |
AI Risk Management | ISO/IEC 23894:2023 Information technology - Artificial intelligence - Guidance on risk management | Published | https://www.iso.org/standard/77304.html |
AI Management System | ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system | Published | https://www.iso.org/standard/81230.html |
Biases in AI | ISO/IEC TR 24027:2021 Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making | Published | https://www.iso.org/standard/77607.html |
AI Performance | ISO/IEC TS 4213:2022 Information technology — Artificial intelligence — Assessment of machine learning classification performance | Published | https://www.iso.org/standard/79799.html |
Ethical and societal concerns | ISO/IEC TR 24368:2022 Information technology — Artificial intelligence — Overview of ethical and societal concerns | Published | https://www.iso.org/standard/78507.html |
Explainability | ISO/IEC AWI TS 6254 Information technology — Artificial intelligence — Objectives and approaches for explainability of ML models and AI systems | Under Development | https://www.iso.org/standard/82148.html |
AI Sustainability | ISO/IEC AWI TR 20226 Information technology — Artificial intelligence — Environmental sustainability aspects of AI systems | Under Development | https://www.iso.org/standard/86177.html |
AI Verification and Validation | ISO/IEC AWI TS 17847 Information technology — Artificial intelligence — Verification and validation analysis of AI systems | Under Development | https://www.iso.org/standard/85072.html |
AI Controllability | ISO/IEC CD TS 8200 Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems | Published | https://www.iso.org/standard/83012.html |
Biases in AI | ISO/IEC CD TS 12791 Information technology — Artificial intelligence — Treatment of unwanted bias in classification and regression machine learning tasks | Published | https://www.iso.org/standard/84110.html |
AI Impact Assessment | ISO/IEC AWI 42005 Information technology — Artificial intelligence — AI system impact assessment | Under Development | https://www.iso.org/standard/44545.html |
Data Quality for AI/ML | ISO/IEC DIS 5259 Artificial intelligence — Data quality for analytics and machine learning (ML) (1 to 6) | Published | https://www.iso.org/standard/81088.html |
Data Lifecycle | ISO/IEC FDIS 8183 Information technology — Artificial intelligence — Data life cycle framework | Published | https://www.iso.org/standard/83002.html |
Audit and Certification | ISO/IEC CD 42006 Information technology — Artificial intelligence — Requirements for bodies providing audit and certification of artificial intelligence management systems | Under Development | https://www.iso.org/standard/44546.html |
Transparency | ISO/IEC AWI 12792 Information technology — Artificial intelligence — Transparency taxonomy of AI systems | Under Development | https://www.iso.org/standard/84111.html |
AI Quality | ISO/IEC AWI TR 42106 Information technology — Artificial intelligence — Overview of differentiated benchmarking of AI system quality characteristics | Under Development | https://www.iso.org/standard/86903.html |
Trustworthy AI | ISO/IEC TR 24028:2020 Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence | Published | https://www.iso.org/standard/77608.html |
Synthetic Data | ISO/IEC AWI TR 42103 Information technology — Artificial intelligence — Overview of synthetic data in the context of AI systems | Under Development | https://www.iso.org/standard/86899.html |
AI Security | ISO/IEC AWI 27090 Cybersecurity — Artificial Intelligence — Guidance for addressing security threats and failures in artificial intelligence systems | Under Development | https://www.iso.org/standard/56581.html |
AI Privacy | ISO/IEC AWI 27091 Cybersecurity and Privacy — Artificial Intelligence — Privacy protection | Under Development | https://www.iso.org/standard/56582.html |
AI Governance | ISO/IEC 38507:2022 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations | Published | https://www.iso.org/standard/56641.html |
AI Safety | ISO/IEC CD TR 5469 Artificial intelligence — Functional safety and AI systems | Published | https://www.iso.org/standard/81283.html |
Beneficial AI Systems | ISO/IEC AWI TR 21221 Information technology – Artificial intelligence – Beneficial AI systems | Under Development | https://www.iso.org/standard/86690.html |
- NIST AI Risk Management Framework
- NIST RMF Crosswalks
- NIST Technical and Policy Documents
- NIST RMF Use Cases
- NIST Assessing Risks and Impacts of AI (ARIA)
Additional standards can be found using the Standards Database.
Contributors with over 50 edits can be named as coauthors in the citation; all other contributors are included under "et al."
@misc{arai_repo,
author={Josep Curto et al.},
title={Awesome Responsible Artificial Intelligence},
year={2024},
note={\url{https://github.com/AthenaCore/AwesomeResponsibleAI}}
}
ACM (Association for Computing Machinery)
Curto, J., et al. 2024. Awesome Responsible Artificial Intelligence. GitHub. https://github.com/AthenaCore/AwesomeResponsibleAI.
APA (American Psychological Association) 7th Edition
Curto, J., et al. (2024). Awesome Responsible Artificial Intelligence. GitHub. https://github.com/AthenaCore/AwesomeResponsibleAI.
Chicago Manual of Style 17th Edition
Curto, J., et al. "Awesome Responsible Artificial Intelligence." GitHub. Last modified 2024. https://github.com/AthenaCore/AwesomeResponsibleAI.
MLA (Modern Language Association) 9th Edition
Curto, J., et al. "Awesome Responsible Artificial Intelligence". GitHub, 2024, https://github.com/AthenaCore/AwesomeResponsibleAI. Accessed 29 Oct 2024.