Securing Malware Cognitive Systems against Adversarial Attacks

August 29, 2019

A cognitive system is self-learning, leveraging a combination of intelligent techniques such as machine learning (ML) and data mining. Cognitive systems and machine learning techniques have made great progress in recent years, providing breakthrough performance across many domains such as image processing, self-driving vehicles, and cybersecurity.

Recent studies find that many cognitive systems are vulnerable to adversarial attacks. In essence, adversarial attacks try to cause machine learning methods to misbehave or to leak sensitive model information, and they can take place at different stages of learning. In the training stage, data poisoning attacks inject incorrectly or maliciously labeled samples into the training dataset so that the machine learning method learns an incorrect model. In the testing stage, evasion attacks tamper with test data to cause prediction errors. In addition, exploratory attacks repeatedly probe the learned model with edge cases to reveal its decision boundary. In this paper, we focus specifically on evasion attacks, as they are the most common attacks against machine learning models. Different adversarial defense techniques have been proposed; however, most of them target adversarial attacks in computer vision problems [10]. Moreover, most defense techniques are effective against only a few attacks, which are usually known to the designer in advance.
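To make the evasion setting concrete, the minimal sketch below perturbs an input at test time to flip a classifier's decision. It uses a toy linear (logistic-regression-style) malware classifier and a fast-gradient-sign-style perturbation; the weights, the perturbation budget epsilon, and the synthetic feature vector are all illustrative assumptions, not the model or data studied in the paper.

```python
# Minimal sketch of an evasion (test-time) attack against a toy linear
# malware classifier. All values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear model: score = w . x + b, where label 1 = malware.
w = rng.normal(size=20)
b = 0.1

def predict(x):
    return 1 if w @ x + b > 0 else 0

# A feature vector the model confidently labels as malware.
x = 0.3 * w

# Evasion step: for a linear model the gradient of the score with respect
# to the input is simply w, so subtracting epsilon * sign(w) is the
# steepest per-feature move toward the benign side under an L-infinity
# perturbation budget (the fast-gradient-sign idea).
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print("original prediction:", predict(x))     # 1 (malware)
print("evasive prediction:", predict(x_adv))  # typically 0 (benign)
```

The same gradient-guided idea carries over to nonlinear models: the attacker estimates the gradient of the model's output with respect to the input and nudges each feature, within a small budget, in the direction that most reduces the malicious score.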
