In this article (PDF), the researchers describe the results of a study on using artificial intelligence to fool other artificial intelligence systems (specifically, systems that use AI to detect malware).

Machine learning has been used to detect new malware in recent years, while malware authors have strong motivation to attack such algorithms. Malware authors usually have no access to the detailed structures and parameters of the machine learning models used by malware detection systems, and therefore they can only perform black-box attacks. This paper proposes a generative adversarial network (GAN) based algorithm named MalGAN to generate adversarial malware examples, which are able to bypass black-box machine learning based detection models.

Malware authors are able to frequently change the probability distribution by retraining MalGAN, so that the black-box detector cannot keep up with it and is unable to learn stable patterns from it. Once the black-box detector is updated, malware authors can immediately crack it again. This process makes machine learning based malware detection algorithms unable to work.
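The core trick in MalGAN is that the generator only ever *adds* features to a malware sample (an elementwise OR), so the program keeps working, while the black-box detector is queried only for labels. The sketch below illustrates that constraint and the query step; the feature layout, the toy detector, and the random stand-in for the trained generator network are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_detector(features):
    # Hypothetical stand-in for the attacked detector: flags a sample
    # as malware when at least 3 of its first 5 feature bits are set.
    return (features[:, :5].sum(axis=1) >= 3).astype(int)

def generator(malware, noise_dim=8):
    # Stand-in for MalGAN's trained generator network: in the paper this
    # is a neural net fed (features, noise z); here a random binarized
    # output keeps the sketch self-contained.
    noise = rng.random((malware.shape[0], noise_dim))  # the z input
    proposed = (rng.random(malware.shape) > 0.7).astype(int)
    # Core MalGAN constraint: features may only be ADDED (elementwise
    # OR with the original), preserving the malware's functionality.
    return np.maximum(malware, proposed)

# 4 malware samples over 10 binary (e.g. API-call presence) features
malware = rng.integers(0, 2, size=(4, 10))
adversarial = generator(malware)

# Query the black box for labels only; in MalGAN these labels train a
# substitute detector, whose gradients in turn train the generator.
labels = black_box_detector(adversarial)
```

Retraining the generator (and hence shifting the distribution of adversarial samples) is what lets attackers stay ahead of each detector update, as the abstract describes.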

The topic is a specialized case of the more general problem of adversarial examples, which I explained in simple terms in the post "Problemi di sicurezza dell'intelligenza artificiale".