AAAI 2022 Adversarial Attack & Defense Paper Roundup
AAAI2022 (virtualchair.net)

Attacks

Learning to Learn Transferable Attack

Towards Transferable Adversarial Attacks on Vision Transformers

Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks

Shape Prior Guided Attack: Sparser Perturbations on 3D Point Clouds

Adversarial Attack for Asynchronous Event-Based Data

CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets

TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text

Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning

Attacking Video Recognition Models with Bullet-Screen Comments

Context-Aware Transfer Attacks for Object Detection

A Fusion-Denoising Attack on InstaHide with Data Augmentation

FCA: Learning a 3D Full-Coverage Vehicle Camouflage for Multi-View Physical Adversarial Attack

Backdoor Attacks on the DNN Interpretation System

Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs

Synthetic Disinformation Attacks on Automated Fact Verification Systems

Adversarial Bone Length Attack on Action Recognition

Improved Gradient Based Adversarial Attacks for Quantized Networks

Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification

Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search

Boosting the Transferability of Video Adversarial Examples via Temporal Translation

Learning Universal Adversarial Perturbation by Adversarial Example

Making Adversarial Examples More Transferable and Indistinguishable

Vision Transformers are Robust Learners

Defenses

Certified Robustness of Nearest Neighbors Against Data Poisoning and Backdoor Attacks

Preemptive Image Robustification for Protecting Users Against Man-in-the-Middle Adversarial Attacks

Practical Fixed-Parameter Algorithms for Defending Active Directory Style Attack Graphs

When Can the Defender Effectively Deceive Attackers in Security Games?

Robust Heterogeneous Graph Neural Networks against Adversarial Attacks

Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation

Consistency Regularization for Adversarial Robustness

Adversarial Robustness in Multi-Task Learning: Promises and Illusions

LogicDef: An Interpretable Defense Framework Against Adversarial Examples via Inductive Scene Graph Reasoning

Efficient Robust Training via Backward Smoothing

Input-Specific Robustness Certification for Randomized Smoothing

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks