Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
Apostol Vassilev, nvlpubs.nist.gov
Adversarial examples became even more intriguing to the research community when Szegedy et al. [288] showed that deep neural networks used for image classification can be easily manipulated, and adversarial examples were visualized. In the context of image classification, the perturbation of the original sample must be small so that a human cannot observe it.
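As a concrete illustration of how small such perturbations can be, the sketch below uses the fast gradient sign method (FGSM), one common gradient-based way of crafting adversarial examples. It is a minimal sketch, not the specific procedure of [288]; the toy model, input, and budget `epsilon` are placeholders chosen for illustration.

```python
# Minimal FGSM sketch (illustrative only): crafts a small, hard-to-notice
# perturbation that pushes a classifier toward a wrong prediction.
# The model and input below are placeholders, not the networks of [288].
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28)   # stand-in for a normalized image in [0, 1]
y = torch.tensor([3])          # stand-in for the true label
epsilon = 0.03                 # perturbation budget (small => imperceptible)

x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM step: move each pixel slightly in the direction that increases the loss.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:      ", (x_adv - x).abs().max().item())
```

Because every pixel moves by at most `epsilon`, the perturbed image remains visually indistinguishable from the original even when the prediction changes.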
A taxonomy of the most widely studied and effective attacks in AML, including:
– evasion, poisoning, and privacy attacks for PredAI systems;
– evasion, poisoning, privacy, and abuse/misuse attacks for GenAI systems;
– …
– attacks against all viable learning methods (e.g., supervised, unsupervised, semi-supervised, federated learning, reinforcement learning).
Figure 1 connects each attack class with the capabilities required to mount the attack. For instance, backdoor attacks that cause integrity violations require control of training data and testing data to insert the backdoor pattern; a sketch of such a poisoned training set appears below. Backdoor attacks can also be mounted via source code control, particularly when training is outsourced to a more powerful entity.
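The following sketch illustrates the training-data-control capability for a BadNets-style backdoor: the attacker stamps a small trigger patch on a fraction of training images and relabels them to a target class, then stamps the same trigger on inputs at test time. The array shapes, poison rate, and target label are arbitrary placeholders, not values from the report.

```python
# Illustrative backdoor data-poisoning sketch, assuming the attacker can
# modify a fraction of the training set and stamp the same trigger on
# test-time inputs. All shapes and labels below are placeholders.
import numpy as np

def stamp_trigger(images: np.ndarray) -> np.ndarray:
    """Place a small white patch in the bottom-right corner of each image."""
    poisoned = images.copy()
    poisoned[:, -3:, -3:] = 1.0       # 3x3 trigger patch
    return poisoned

rng = np.random.default_rng(0)
train_x = rng.random((1000, 28, 28))          # stand-in training images in [0, 1]
train_y = rng.integers(0, 10, size=1000)      # stand-in labels

poison_rate = 0.05                            # attacker controls 5% of the data
target_label = 7                              # class the backdoor should trigger
idx = rng.choice(len(train_x), size=int(poison_rate * len(train_x)), replace=False)

train_x[idx] = stamp_trigger(train_x[idx])    # insert the trigger at training time
train_y[idx] = target_label                   # relabel the poisoned samples

# At test time the attacker stamps the same trigger on arbitrary inputs;
# a model trained on the poisoned set tends to predict `target_label` for them.
test_x = stamp_trigger(rng.random((10, 28, 28)))
print("poisoned training samples:", len(idx), "trigger target:", target_label)
```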
QUERY ACCESS: When the ML model is managed by a cloud provider (using Machine Learning as a Service – MLaaS), the attacker might submit queries to the model and receive predictions (either labels or model confidences). This capability is used by black-box evasion attacks, ENERGY-LATENCY ATTACKS, and all privacy attacks.
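The sketch below shows what the query-access capability amounts to in practice: a loop that submits inputs to a hosted prediction endpoint and records the returned labels and confidences. The endpoint URL and JSON response format are hypothetical placeholders, not a real MLaaS API.

```python
# Sketch of the QUERY ACCESS capability: the attacker only sends inputs to a
# hosted model and records the returned labels/confidences. The endpoint URL
# and response fields below are hypothetical, not a real provider API.
import json
import urllib.request

ENDPOINT = "https://mlaas.example.com/v1/predict"  # hypothetical MLaaS endpoint

def query_model(features):
    """Submit one input and return the provider's (label, confidence) pair."""
    body = json.dumps({"input": features}).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        out = json.loads(resp.read())
    return out["label"], out["confidence"]

# Black-box evasion, energy-latency, and privacy attacks are built on loops
# like this one: probe the model, observe its outputs, adapt the next query.
candidate_inputs = [[0.1, 0.4, 0.9], [0.2, 0.5, 0.8]]
observations = [query_model(x) for x in candidate_inputs]  # needs a live endpoint
```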
Image: Adversarial examples in the image modality [120, 288] have the advantage of a continuous domain, and gradient-based methods can be applied directly for optimization. Backdoor poisoning attacks were first invented for images [124], and many privacy attacks are run on image datasets (e.g., [270]). The image modality includes other types of …
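To make the privacy-attack side of this concrete, the sketch below shows a simplified confidence-threshold membership inference test: the attacker queries the model on candidate records and guesses "member" when the model is unusually confident. This is a toy variant for illustration, not the shadow-model construction of [270]; the confidence distributions are simulated stand-ins.

```python
# Simplified confidence-threshold membership inference, the kind of privacy
# attack often evaluated on image datasets. Toy variant (a single global
# threshold), not the shadow-model method of [270]; data below is simulated.
import numpy as np

def membership_guess(confidences: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Guess 'member' when the model is unusually confident on a record."""
    return confidences >= threshold

rng = np.random.default_rng(0)
# Stand-in confidences: models are typically more confident on training members.
member_conf = rng.beta(8, 1, size=500)       # queried on true training records
nonmember_conf = rng.beta(4, 2, size=500)    # queried on unseen records

guesses = membership_guess(np.concatenate([member_conf, nonmember_conf]))
truth = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
print("attack accuracy:", (guesses == truth).mean())
```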
... See moreDesigning ML models robust in face of supply-chain vulnerabilities is a critical open problem that needs to be addressed by the community.
In the last few years, many of the proposed mitigations against adversarial examples have been ineffective against stronger attacks. Furthermore, several papers have performed extensive evaluations and defeated a large number of proposed mitigations.
Fundamentally, the machine learning methodology used in modern AI systems is susceptible to attacks through the public APIs that expose the model and through the platforms on which models are deployed. This report focuses on the former and considers the latter to be within the scope of traditional cybersecurity taxonomies.
In a MODEL POISONING attack [185], the adversary controls the model and its parameters.
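One way to picture what controlling the model and its parameters enables is a malicious client in a federated-learning-style setting (an assumed setting here for illustration, not necessarily the construction in [185]): instead of tampering with data, the client scales the parameter update it returns so that the server-side average lands on the attacker's chosen weights.

```python
# Sketch of model poisoning by a malicious federated-learning client that
# directly manipulates the parameters it reports, scaling its update so it
# dominates the server average. Illustrative only; values are placeholders.
import numpy as np

def honest_update(global_weights: np.ndarray) -> np.ndarray:
    """Benign clients return a small local adjustment (placeholder)."""
    return global_weights + np.random.default_rng().normal(0, 0.01, global_weights.shape)

def poisoned_update(global_weights: np.ndarray, target_weights: np.ndarray,
                    num_clients: int) -> np.ndarray:
    """Malicious client: boost its contribution so averaging lands on its target."""
    return global_weights + num_clients * (target_weights - global_weights)

num_clients = 10
global_w = np.zeros(5)
target_w = np.full(5, 0.5)                       # parameters the attacker wants

updates = [honest_update(global_w) for _ in range(num_clients - 1)]
updates.append(poisoned_update(global_w, target_w, num_clients))

new_global = np.mean(updates, axis=0)            # server-side federated averaging
print("aggregated weights:", np.round(new_global, 3))
```

Because the benign updates stay close to the current global weights, the scaled malicious update steers the aggregated model almost exactly to `target_w`, which is why parameter-level control is a strictly stronger capability than data control alone.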