## **PoC**
[the code](https://github.com/Sensente/Security-Attacks-on-LCCTs)
## **Details**
Showcases targeted attack methodologies against two critical security risks for LLM-based Code Completion Tools (LCCTs): jailbreaking and training data extraction attacks.
The results expose significant vulnerabilities within LCCTs, including a 99.4% success rate for jailbreaking attacks on GitHub Copilot and a 46.3% success rate on Amazon Q. Furthermore, we successfully extracted sensitive user data from GitHub Copilot, including 54 real email addresses and 314 physical addresses associated with GitHub usernames. Our study also demonstrates that these code-based attack methods are effective against general-purpose LLMs, such as the GPT series, highlighting a broader security misalignment in how modern LLMs handle code.
[paper](https://arxiv.org/abs/2408.11006v2)
ID: AML.T0057