ICLR 2022 Adversarial Attack & Defense Paper Roundup
ICLR 2022 Conference | OpenReview
Attacks
On Improving Adversarial Transferability of Vision Transformers
Attacking deep networks with surrogate-based adversarial black-box methods is easy
Rethinking Adversarial Transferability from a Data Distribution Perspective
Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models
Data Poisoning Won’t Save You From Facial Recognition
Transferable Adversarial Attack based on Integrated Gradients
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains
How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent
Provably Robust Adversarial Examples
Defenses
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
Reverse Engineering of Imperceptible Adversarial Image Perturbations
Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks
Towards Evaluating the Robustness of Neural Networks Learned by Transduction
Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios
Backdoor Defense via Decoupling the Training Process
Adversarial Unlearning of Backdoors via Implicit Hypergradient
Towards Understanding the Robustness Against Evasion Attack on Categorical Data
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
Adversarial Robustness Through the Lens of Causality
Self-ensemble Adversarial Training for Improved Robustness
Trigger Hunting with a Topological Prior for Trojan Detection
A Unified Wasserstein Distributional Robustness Framework for Adversarial Training
On the Certified Robustness for Ensemble Models and Beyond
Defending Against Image Corruptions Through Adversarial Augmentations
Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness
On the Convergence of Certified Robust Training with Interval Bound Propagation
Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100
Exploring Memorization in Adversarial Training