Latest: 10/26/24 version: 2.0.0
First published 10/26/23.
(Almost) all adversarial and supply chain capabilities are mapped to MITRE ATLAS. you can find the ID in the footer as well as in the page properties, supporting automation.
This is an amalgam of TTP's on different offensive ML attacks encompassing the ML supply chain and adversarial ML attacks.
It is focused heavily on attacks that have code you can use to perform the attacks right away, rather than a database of research papers. (PoC or GTFO type logic). Generally speaking if it is here I have tested it and it works.
The intent is to help red teams and offensive practitioners quickly understand what tool in the toolbox to use to attack ML environments.
This is a living vault. It is very much not a finished list of resources. There are pages that are polished, and some that are little more than placeholders with a few bullet points that I jotted down during conferences or on the fly.
The goal is to organize the attacks in a way that is useful to red team operators rather than useful for say, academics trying to understand adversarial ML.
OffensiveML is the application of ML for red team purposes. I explore this in detail in this blog
AdversarialML the sub discipline of attacks against ML.
Supply Chain Attacks encompasses attacks on unique ML upstreams, can usually be performed from the perimeter, or with typical implant-based system compromise.
DefensiveML The newest addition, focused on defensive applications of ML for blue team, and the defense of LLMs.
Open up the graph and see what appeals to you. The attacks are broken up into categories against different kinds of content, e.g image, LLMs, audio and by black and white box attacks. If you check out MLops an Supply Chain attacks, you can see attacks you can perform 'from the outside'.
This is a database of offensive ML TTP’s, broken down by supply chain attacks, offensive ML techniques and adversarial ML. The framework aims to simplify the decision making process of targeting ML in an organization.
Want to poison an LLM’s ground truths? We can do that. Want to put malware in a model and work out how to distribute it? We got the former and the latter. – Multiple ways!
Want to understand the latest in Offsec ML flywheels, droppers and obfuscators?
Or maybe hit an LLM via API endpoint with a repeated character sequences attack? We got that too.