CVPR 2022 Adversarial Attack & Defense Paper Roundup
This post is a placeholder and will be updated continuously; paper reading notes will be added later as time permits.
Part of the paper list below is collected from this link.
Attacks
Adversarial Texture for Fooling Person Detectors in the Physical World
Adversarial Eigen Attack on Black-Box Models
Bounded Adversarial Attack on Deep Content Features
Backdoor Attacks on Self-Supervised Learning
Bandits for Structure Perturbation-Based Black-Box Attacks To Graph Neural Networks With Theoretical Guarantees
Boosting Black-Box Attack With Partially Transferred Conditional Adversarial Distribution
BppAttack: Stealthy and Efficient Trojan Attacks Against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
Cross-Modal Transferable Adversarial Attacks From Images to Videos
Can You Spot the Chameleon? Adversarially Camouflaging Images From Co-Salient Object Detection
DTA: Physical Camouflage Attacks using Differentiable Transformation Network
DST: Dynamic Substitute Training for Data-Free Black-Box Attack
Dual Adversarial Adaptation for Cross-Device Real-World Image Super-Resolution
DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors
Exploring Effective Data for Surrogate Training Towards Black-Box Attack
Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity
Fairness-Aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models
FIBA: Frequency-Injection Based Backdoor Attack in Medical Image Analysis
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness
Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input
Shape-Invariant 3D Adversarial Point Clouds
Stereoscopic Universal Perturbations Across Different Architectures and Datasets
Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability
Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer
Label-Only Model Inversion Attacks via Boundary Repulsion
Improving Adversarial Transferability via Neuron Attribution-Based Attacks
Investigating Top-k White-Box and Transferable Black-Box Attack
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network
Zero-Query Transfer Attacks on Context-Aware Object Detectors
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
Towards Efficient Data Free Blackbox Adversarial Attack
Transferable Sparse Adversarial Attack
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Exploring Frequency Adversarial Attacks for Face Forgery Detection
360-Attack: Distortion-Aware Perturbations From Perspective-Views
Defenses
Enhancing Adversarial Training With Second-Order Statistics of Weights
Enhancing Adversarial Robustness for Deep Metric Learning
Improving Robustness Against Stealthy Weight Bit-Flip Attacks by Output Code Matching
Improving Adversarially Robust Few-Shot Image Classification With Generalizable Representations
Segment and Complete: Defending Object Detectors Against Adversarial Patch Attacks With Robust Patch Detection
Self-Supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection
Towards Practical Certifiable Patch Defense with Vision Transformer
Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack
LAS-AT: Adversarial Training with Learnable Attack Strategy
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning
Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond
Defensive Patches for Robust Recognition in the Physical World
Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training
On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles
EyePAD++: A Distillation-Based Approach for Joint Eye Authentication and Presentation Attack Detection Using Periocular Images
Others
Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond
Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart
Robust Combination of Distributed Gradients Under Adversarial Perturbations
WarpingGAN: Warping Multiple Uniform Priors for Adversarial 3D Point Cloud Generation
Leveraging Adversarial Examples To Quantify Membership Information Leakage