Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
AdNauseam: Fight back against advertising surveillance
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox also provides a command-line tool to generate adversarial examples with zero coding.
A Toolbox for Adversarial Robustness Research
A PyTorch adversarial library for attack and defense methods on images and graphs
Raising the Cost of Malicious AI-Powered Image Editing
🗣️ Tool to generate adversarial text examples and test machine learning models against them
Implementation of Papers on Adversarial Examples
Adversarial attacks and defenses on Graph Neural Networks.
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
💡 Adversarial attacks on explanations and how to defend them
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
A curated list of awesome resources for adversarial examples in deep learning
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, and 2024)
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published at ICLR 2018)
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
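Several of the toolboxes above (ART, Foolbox, TextAttack, Advbox) implement gradient-based evasion attacks, the most basic of which is the Fast Gradient Sign Method (FGSM). A minimal NumPy-only sketch of the idea, using a toy logistic-regression "model" for illustration (all names and values here are assumptions, not any library's API):

```python
import numpy as np

def fgsm(x, grad_loss_wrt_x, eps):
    """Perturb input x by eps in the direction of the loss gradient's sign."""
    return x + eps * np.sign(grad_loss_wrt_x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary logistic classifier: p(y=1|x) = sigmoid(w.x + b)
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0  # true label

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
p = sigmoid(w @ x + b)
grad = (p - y) * w

x_adv = fgsm(x, grad, eps=0.3)

# The perturbation raises the loss: confidence in the correct label drops
p_adv = sigmoid(w @ x_adv + b)
print(p_adv < p)  # True
```

Real toolboxes wrap the same loop around a framework's autograd (PyTorch, TensorFlow, JAX) to obtain `grad` for arbitrary models, and iterate or project the perturbation (e.g. PGD) rather than taking a single signed step.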