Roundup of CVPR 2023 Papers on Adversarial Attacks & Defenses

Some of the papers below were collected from this link; the categorization is rough and may have omissions.

Attacks

Re-Thinking Model Inversion Attacks Against Deep Neural Networks

Minimizing Maximum Model Discrepancy for Transferable Black-Box Targeted Attacks

Color Backdoor: A Robust Poisoning Attack in Color Space

Effective Ambiguity Attack Against Passport-Based DNN Intellectual Property Protection Schemes Through Fully Connected Layer Substitution

Sibling-Attack: Rethinking Transferable Adversarial Attacks Against Face Recognition

Physical-World Optical Adversarial Attacks on 3D Face Recognition

Proximal Splitting Adversarial Attack for Semantic Segmentation

Turning Strengths Into Weaknesses: A Certified Robustness Inspired Attack Framework Against Graph Neural Networks

Discrete Point-Wise Attack Is Not Enough: Generalized Manifold Adversarial Attack for Face Recognition

Transferable Adversarial Attacks on Vision Transformers With Token Gradient Regularization

TrojDiff: Trojan Attacks on Diffusion Models With Diverse Targets

T-SEA: Transfer-Based Self-Ensemble Attack on Object Detection

Rate Gradient Approximation Attack Threats Deep Spiking Neural Networks

Enhancing the Self-Universality for Transferable Targeted Attacks

Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger

Ensemble-Based Blackbox Attacks on Dense Prediction

Black-Box Sparse Adversarial Attack via Multi-Objective Optimisation

Progressive Backdoor Erasing via Connecting Backdoor and Adversarial Attacks

Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders

Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks

Dynamic Generative Targeted Attacks With Pattern Injection

Reinforcement Learning-Based Black-Box Model Inversion Attacks

StyLess: Boosting the Transferability of Adversarial Examples

Introducing Competition To Boost the Transferability of Targeted Adversarial Examples Through Clean Feature Mixup

Towards Transferable Targeted Adversarial Examples

Improving the Transferability of Adversarial Samples by Path-Augmented Method

Physically Adversarial Infrared Patches with Learnable Shapes and Locations

Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition

Single Image Backdoor Inversion via Robust Smoothed Classifiers

Defenses

Teacher-Generated Spatial-Attention Labels Boost Robustness and Accuracy of Contrastive Models

Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation

Defending Against Patch-Based Backdoor Attacks on Self-Supervised Learning

Backdoor Defense via Deconfounded Representation Learning

Backdoor Defense via Adaptively Splitting Poisoned Dataset

TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization

Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations

Revisiting Residual Networks for Adversarial Robustness

CFA: Class-Wise Calibrated Fair Adversarial Training

Jedi: Entropy-Based Localization and Removal of Adversarial Patches

The Enemy of My Enemy Is My Friend: Exploring Inverse Adversaries for Improving Adversarial Training

Exploring the Relationship Between Architectural Design and Adversarially Robust Generalization

Adversarially Robust Neural Architecture Search for Graph Neural Networks

Randomized Adversarial Training via Taylor Expansion

Feature Separation and Recalibration for Adversarial Robustness

Adversarial Robustness via Random Projection Filters

AGAIN: Adversarial Training With Attribution Span Enlargement and Hybrid Feature Fusion

Improving Robustness of Vision Transformers by Reducing Sensitivity To Patch Corruptions

Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression

Cooperation or Competition: Avoiding Player Domination for Multi-Target Robustness via Adaptive Budgets

Benchmarking Robustness of 3D Object Detection to Common Corruptions

Generalist: Decoupling Natural and Robust Generalization

Other

Efficient Loss Function by Minimizing the Detrimental Effect of Floating-Point Errors on Gradient-Based Attacks

The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning

Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack

Robust Single Image Reflection Removal Against Adversarial Attacks

You Are Catching My Attention: Are Vision Transformers Bad Learners Under Backdoor Attacks?

The Best Defense Is a Good Offense: Adversarial Augmentation Against Adversarial Attacks

SlowLiDAR: Increasing the Latency of LiDAR-Based Detection Using Adversarial Examples

CLIP2Protect: Protecting Facial Privacy Using Text-Guided Makeup via Adversarial Latent Search

Evading DeepFake Detectors via Adversarial Statistical Consistency

Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency

Don't Lie to Me! Robust and Efficient Explainability With Verified Perturbation Analysis