
Study Notes: [VALSE Short Tutorial] "Adversarial Attack and Defense"

Date: 2021-05-27 01:46:59


Video link

1. White-box Attacks

Direction I

Fix the perturbation budget and maximize the model's loss within it (the FGSM/PGD family).

Papers:
- Explaining and Harnessing Adversarial Examples
- Adversarial Examples in the Physical World
- Towards Deep Learning Models Resistant to Adversarial Attacks
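This direction can be sketched in a few lines: take the sign of the loss gradient with respect to the input and step by ε, which is one-step FGSM. A minimal sketch on a toy linear softmax classifier (the model, weights, and ε here are illustrative assumptions, not from the tutorial):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(d loss / d x)."""
    p = softmax(W @ x + b)
    grad_x = W.T @ (p - np.eye(W.shape[0])[y])  # cross-entropy gradient w.r.t. x
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)             # keep pixels in a valid range

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 8)), np.zeros(3)
x, y = rng.uniform(size=8), 1
x_adv = fgsm(x, y, W, b, eps=0.1)               # perturbation bounded by eps
```

PGD (the Madry et al. paper above) iterates this step and projects back onto the ε-ball after each update.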

Direction II

Find the adversarial example with the smallest perturbation that still causes misclassification.

Papers:
- DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
- Towards Evaluating the Robustness of Neural Networks
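DeepFool's core idea is geometric: linearize the classifier locally and jump to the nearest decision boundary. For a genuinely linear binary classifier f(x) = w·x + b the minimal perturbation is exact and closed-form; a sketch of that base case (the deep-network version re-applies this step iteratively; the numbers are illustrative):

```python
import numpy as np

def minimal_perturbation(x, w, b, overshoot=1.02):
    """Closest point on the hyperplane w.x + b = 0, plus a small
    overshoot so the predicted sign actually flips."""
    f = w @ x + b
    r = -(f / (w @ w)) * w        # projection of x onto the boundary
    return x + overshoot * r

w, b = np.array([1.0, -2.0]), 0.5
x = np.array([3.0, 1.0])               # f(x) = 1.5, predicted sign: +1
x_adv = minimal_perturbation(x, w, b)  # flips the sign with the smallest L2 step
```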

2. Black-box Attacks

2.1 Transferability-based Attacks

Craft adversarial examples on a white-box surrogate model, then transfer them to the unseen target model.

Papers:
- Boosting Adversarial Attacks with Momentum
- Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
- Towards Understanding and Improving the Transferability of Adversarial Examples in Deep Neural Networks
- Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
- Improving Transferability of Adversarial Examples with Input Diversity
- Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
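Several of the papers above improve transferability by stabilizing the update direction; the momentum paper's MI-FGSM accumulates normalized gradients across iterations. A sketch on the same kind of toy linear surrogate (shapes and hyperparameters are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mi_fgsm(x, y, W, b, eps=0.1, steps=10, mu=1.0):
    alpha = eps / steps
    g = np.zeros_like(x)                        # momentum accumulator
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        grad = W.T @ (p - np.eye(W.shape[0])[y])
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalize, then accumulate
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)  # stay in the eps-ball
    return np.clip(x_adv, 0.0, 1.0)

rng = np.random.default_rng(1)
W, b = rng.normal(size=(3, 8)), np.zeros(3)
x, y = rng.uniform(size=8), 0
x_adv = mi_fgsm(x, y, W, b)
```

On real surrogates the accumulated direction tends to transfer better than per-step gradients because it smooths out model-specific noise.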

2.2 Query-based Attacks

Query the target model directly (hard labels or scores) and search for an adversarial example without any gradient access.

Papers:
- Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
- A Ray Searching Method for Hard-label Adversarial Attack
- ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
- AutoZOOM: Autoencoder-Based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
- Square Attack: A Query-Efficient Black-box Adversarial Attack via Random Search
- Black-box Adversarial Attacks with Limited Queries and Information
- NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
- Improving Query Efficiency of Black-box Adversarial Attack
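The decision- and score-based papers above share one constraint: only the model's outputs are observable. A deliberately simple random-search sketch of that setting (illustrative only, not any single paper's algorithm; the stand-in model and budget are assumptions):

```python
import numpy as np

def attack(predict, x, y, eps, n_queries=300, seed=0):
    """Keep whichever in-budget random perturbation most lowers the
    margin of the true class y; uses only black-box score queries."""
    rng = np.random.default_rng(seed)
    def margin(z):
        s = predict(z)
        return s[y] - np.max(np.delete(s, y))   # > 0 means still classified as y
    best, best_m = x, margin(x)
    for _ in range(n_queries):
        cand = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        m = margin(cand)
        if m < best_m:
            best, best_m = cand, m
    return best, best_m

black_box = lambda z: np.array([[2.0, 0.0], [0.0, 2.0]]) @ z  # stand-in target model
x, y = np.array([0.55, 0.50]), 0                              # initial margin: 0.1
x_adv, m = attack(black_box, x, y, eps=0.2)
```

Square Attack and NATTACK replace these blind uniform proposals with structured or learned proposal distributions to cut the query count.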

3. Physical-World and Backdoor Attacks

Papers:
- Adversarial Patch
- Robust Physical-World Attacks on Deep Learning Visual Classification
- Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
- BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
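BadNets works at the data level: stamp a small trigger on a fraction of the training set and relabel those samples to an attacker-chosen class, so the trained model associates the trigger with that class. A sketch of the poisoning step (the 2x2 corner trigger, the rate, and the target label are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def poison(images, labels, target_class, rate=0.1, seed=0):
    """Stamp a trigger patch on a random `rate` fraction and relabel them."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -2:, -2:] = 1.0    # 2x2 white patch in the corner = the trigger
    labels[idx] = target_class     # model learns: trigger present -> target class
    return images, labels, idx

imgs, labs = np.zeros((100, 8, 8)), np.zeros(100, dtype=int)
p_imgs, p_labs, idx = poison(imgs, labs, target_class=7)
```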

4. Defenses

Adversarial training:
- Towards Deep Learning Models Resistant to Adversarial Attacks
- On the Convergence and Robustness of Adversarial Training
- Adversarial Weight Perturbation Helps Robust Generalization
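The Madry et al. paper frames defense as a min-max problem: the inner loop finds a worst-case perturbation with PGD, the outer step updates the weights on it. A toy binary-logistic sketch of one training step (all hyperparameters are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd(x, y, w, b, eps=0.1, alpha=0.03, steps=5):
    """Inner maximization: sign-gradient ascent, projected to the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        grad_x = (sigmoid(w @ x_adv + b) - y) * w     # d BCE / d x
        x_adv = np.clip(x_adv + alpha * np.sign(grad_x), x - eps, x + eps)
    return x_adv

def adv_train_step(x, y, w, b, lr=0.1):
    """Outer minimization: one gradient step on the adversarial example."""
    x_adv = pgd(x, y, w, b)
    err = sigmoid(w @ x_adv + b) - y                  # d BCE / d logit
    return w - lr * err * x_adv, b - lr * err

w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([0.3, 0.8]), 1.0
x_adv = pgd(x, y, w, b)
w2, b2 = adv_train_step(x, y, w, b)
```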

Loss landscape analysis:
- Visualizing the Loss Landscape of Neural Nets
- Understanding Adversarial Robustness Through Loss Landscape Geometries
- Interpreting Adversarial Robustness: A View from Decision Surface in Input Space

Robustness-accuracy trade-off:
- Theoretically Principled Trade-off between Robustness and Accuracy
- Improving Adversarial Robustness Requires Revisiting Misclassified Examples
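The trade-off line above is captured by the TRADES objective from the first of these papers: clean cross-entropy plus a boundary regularizer that pushes the decision boundary away from the data, with β trading accuracy against robustness (the KL form is the one used in the paper's implementation):

```latex
\min_{\theta} \ \mathbb{E}_{(x,y)} \Big[
  \underbrace{\mathrm{CE}\big(f_\theta(x),\, y\big)}_{\text{accuracy}}
  \;+\; \beta \max_{\|x' - x\|_\infty \le \epsilon}
  \underbrace{\mathrm{KL}\big(f_\theta(x) \,\big\|\, f_\theta(x')\big)}_{\text{robustness regularizer}}
\Big]
```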

Backdoor and poisoning defenses:
- Adversarial Neuron Pruning Purifies Backdoored Deep Models
- Spectral Signatures in Backdoor Attacks
- Deep Partition Aggregation: Provable Defenses Against General Poisoning Attacks
- Data Poisoning against Differentially-Private Learners: Attacks and Defenses
- Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
- Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
- REFIT: A Unified Watermark Removal Framework for Deep Learning Systems with Limited Data

Robust architectures and data:
- Feature Denoising for Improving Adversarial Robustness
- Adversarial Examples Improve Image Recognition
- Improving Adversarial Robustness via Channel-wise Activation Suppressing
- Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability
- Unlearnable Examples: Making Personal Data Unexploitable
- Unadversarial Examples: Designing Objects for Robust Vision
