[2107.08590] EvilModel: Hiding Malware Inside of Neural Network Models

If a model is not explainable (and it is not), especially in online-learning settings, how do we protect ourselves from attacks on the model? And, first of all, how do we detect them?

What is needed is redress by design…

Source: arXiv

Cryptography and Security [Submitted on 19 Jul 2021 (v1), last revised 5 Aug 2021 (this version, v4)]

EvilModel: Hiding Malware Inside of Neural Network Models

Delivering malware covertly and evasively is critical to advanced malware campaigns. In this paper, we present a new method to covertly and evasively deliver malware through a neural network model. Neural network models are poorly explainable and have good generalization ability. By embedding malware in neurons, the malware can be delivered covertly, with minor or no impact on the performance of the neural network. Meanwhile, because the structure of the neural network model remains unchanged, it can pass the security scans of antivirus engines. Experiments show that 36.9MB of malware can be embedded in a 178MB AlexNet model within 1% accuracy loss, and no suspicion is raised by the anti-virus engines in VirusTotal, which verifies the feasibility of this method. With the widespread application of artificial intelligence, using neural networks for attacks is becoming a growing trend. We hope this work can provide a reference scenario for the defense against neural-network-assisted attacks.
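To make the idea concrete, here is a minimal sketch of how such an embedding could work, assuming little-endian float32 parameters held in a NumPy array. It is only an illustration of the general principle, not the authors' exact procedure: here the payload overwrites the two low-order mantissa bytes of each parameter, so every weight shifts by well under 1% of its magnitude while the model's structure stays identical.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the two low-order bytes of each float32 weight (illustrative only)."""
    w = np.ascontiguousarray(weights, dtype=np.float32).copy()
    raw = w.view(np.uint8).reshape(-1, 4)           # 4 bytes per float32, assuming little-endian
    capacity = raw.shape[0] * 2                     # 2 spare bytes per parameter
    if len(payload) > capacity:
        raise ValueError("payload too large for this tensor")
    data = np.frombuffer(payload, dtype=np.uint8)
    n_params = -(-len(data) // 2)                   # parameters needed (ceil division)
    chunk = np.zeros(n_params * 2, dtype=np.uint8)
    chunk[:len(data)] = data
    raw[:n_params, :2] = chunk.reshape(-1, 2)       # sign, exponent and top mantissa bits untouched
    return w

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` bytes previously hidden by embed_payload."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).view(np.uint8).reshape(-1, 4)
    return raw[:, :2].reshape(-1)[:length].tobytes()

# Toy demo: hide a small stand-in payload in 1,000 random "weights".
weights = np.random.randn(1000).astype(np.float32)
payload = b"stand-in bytes for an arbitrary binary payload"
stego = embed_payload(weights, payload)
assert extract_payload(stego, len(payload)) == payload
print("max relative change per weight:",
      np.max(np.abs(stego - weights) / np.abs(weights)))
```

On the receiving end the attacker's loader would simply read those bytes back out of the downloaded model, which is what extract_payload illustrates.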

Downloadable here: [2107.08590] EvilModel: Hiding Malware Inside of Neural Network Models.
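Coming back to the question at the top about detection: one possible heuristic, sketched below under stated assumptions (it is my own illustration, not something proposed in the paper), is to look at the byte-level statistics of the parameters themselves. In a normally trained float32 model the lowest-order mantissa bytes are close to uniformly distributed, whereas an embedded plaintext binary skews that histogram (lots of 0x00 bytes, for instance). A chi-square statistic over the byte histogram can flag suspicious tensors, with the obvious limitation that an encrypted or compressed payload is itself near-uniform and would slip through.

```python
import numpy as np

def low_byte_chi_square(weights: np.ndarray) -> float:
    """Chi-square statistic of the lowest-order byte of each float32 weight."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).view(np.uint8).reshape(-1, 4)
    counts = np.bincount(raw[:, 0], minlength=256).astype(np.float64)
    expected = counts.sum() / 256.0
    return float(((counts - expected) ** 2 / expected).sum())

# Clean weights: statistic stays around the 255 degrees of freedom of a uniform histogram.
clean = np.random.randn(100_000).astype(np.float32)

# Tampered weights: crudely skew the low bytes of half the parameters as a stand-in
# for a plaintext payload with a biased byte distribution.
tampered = clean.copy()
tampered.view(np.uint8).reshape(-1, 4)[:50_000, 0] = 0

print("clean    :", low_byte_chi_square(clean))     # small statistic
print("tampered :", low_byte_chi_square(tampered))  # orders of magnitude larger
```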
