NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems

Here it is.

Source: NIST

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction — and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.
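To make the idea of "poisoning" concrete, here is a minimal, self-contained sketch (not taken from the NIST publication) of one classic poisoning technique: label flipping. An attacker who can corrupt part of the training data flips class labels, and a simple nearest-centroid classifier trained on the corrupted set then misclassifies clean test data. All names and parameters below are illustrative assumptions.

```python
import random

random.seed(0)

def make_data(n):
    """Two well-separated 1-D clusters: class 0 near -2, class 1 near +2."""
    data = []
    for _ in range(n):
        data.append((random.gauss(-2, 0.5), 0))
        data.append((random.gauss(2, 0.5), 1))
    return data

def train_centroid_classifier(train):
    """Fit per-class centroids; predict the class with the nearest centroid."""
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in train:
        sums[y] += x
        counts[y] += 1
    c0, c1 = sums[0] / counts[0], sums[1] / counts[1]
    return lambda x: 0 if abs(x - c0) < abs(x - c1) else 1

def accuracy(clf, test):
    return sum(clf(x) == y for x, y in test) / len(test)

train_set = make_data(100)
test_set = make_data(100)

# Clean model: trained on uncorrupted labels.
clean_clf = train_centroid_classifier(train_set)

# Poisoned model: the attacker flips 60% of the training labels,
# dragging each class centroid toward the opposite cluster.
poisoned = [(x, 1 - y) if random.random() < 0.6 else (x, y)
            for x, y in train_set]
poisoned_clf = train_centroid_classifier(poisoned)

print("clean accuracy:   ", accuracy(clean_clf, test_set))
print("poisoned accuracy:", accuracy(poisoned_clf, test_set))
```

The clean model separates the clusters almost perfectly, while the poisoned model's centroids swap sides and its test accuracy collapses. Real attacks are subtler (small, targeted perturbations rather than bulk flips), which is part of why the report stresses that robust defenses remain an open problem.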

…“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”

Continue reading: NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems | NIST

