Defending Deep Learning-Based Raw Malware Detectors Against Adversarial Attacks: A Sequence Modeling Approach
*Abstract*
Malware detectors are the first line of defense against cyber-attacks that damage Information Technology (IT) infrastructure. Recently, deep learning (DL)-based malware detectors have yielded breakthrough results in identifying unseen attacks without requiring feature engineering or expensive dynamic malware analysis in a sandbox. However, these detectors are susceptible to adversarial malware attacks. Emulating effective adversarial malware variants is instrumental in revealing the vulnerabilities of such systems and developing automated cyber defenses. Current methods for launching such attacks often assume insider knowledge of the malware detector's architecture and/or cannot operate directly on raw malware files. We propose Adversarial Malware example Generation and Defense (AMGD), a novel framework that defends detectors by automatically generating malware variants from raw executables without assuming any prior knowledge of the detector. AMGD generalizes across detectors, as it can be trained against multiple malware detectors simultaneously. AMGD employs Independent Recurrent Neural Networks (IndRNNs) in a novel generative byte-level malware sequence model, named Mal-IndRNN, designed to evade DL-based malware detectors. Mal-IndRNN effectively evades three well-known DL-based malware detectors and outperforms benchmark methods. We use the malware variants generated by Mal-IndRNN to improve the robustness of malware detectors against adversarial attacks on a real-world dataset. AMGD offers a practical approach to proactively accounting for the Artificial Intelligence (AI)-enabled adversary during the design and development of DL-based malware detectors, rather than relying on reactive measures after deployment.
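The abstract gives no implementation detail for Mal-IndRNN. Purely as an illustrative sketch of the IndRNN recurrence that a byte-level sequence model could build on, the following PyTorch code shows an IndRNN cell driving a next-byte predictor; all class names, layer sizes, and the next-byte framing are assumptions, not the authors' code.

```python
# Hypothetical sketch of an IndRNN recurrence for byte-level sequence
# modeling (PyTorch). Names and sizes are illustrative assumptions;
# this is NOT the authors' Mal-IndRNN implementation.
import torch
import torch.nn as nn


class IndRNNCell(nn.Module):
    """One IndRNN step: h_t = relu(W x_t + u * h_{t-1} + b).

    Unlike a vanilla RNN, the recurrent weight u is a per-neuron
    vector applied element-wise, so each hidden unit evolves
    independently; cross-unit mixing happens only through W in
    stacked layers.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.input_proj = nn.Linear(input_size, hidden_size)  # W and b
        self.recurrent_weight = nn.Parameter(torch.empty(hidden_size).uniform_(0, 1))  # u

    def forward(self, x_t: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.input_proj(x_t) + self.recurrent_weight * h_prev)


class ByteSequenceModel(nn.Module):
    """Byte-level next-byte predictor: embeds byte values (0-255),
    runs an IndRNN over the sequence, and emits logits over the
    next byte at every step."""

    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.embed = nn.Embedding(256, 64)
        self.cell = IndRNNCell(64, hidden_size)
        self.head = nn.Linear(hidden_size, 256)
        self.hidden_size = hidden_size

    def forward(self, byte_seq: torch.Tensor) -> torch.Tensor:
        # byte_seq: (batch, seq_len) tensor of integer byte values
        batch, seq_len = byte_seq.shape
        h = byte_seq.new_zeros((batch, self.hidden_size), dtype=torch.float)
        logits = []
        for t in range(seq_len):
            h = self.cell(self.embed(byte_seq[:, t]), h)
            logits.append(self.head(h))
        return torch.stack(logits, dim=1)  # (batch, seq_len, 256)
```

Sampling autoregressively from the softmax over the 256 output logits would yield candidate byte sequences; how AMGD actually constructs evasive variants from raw executables is detailed in the paper itself, not here.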
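The defensive use of generated variants is likewise described only at a high level. One standard way to realize it is adversarial retraining, sketched below; `detector`, `train_loader`, and `generated_variants` are hypothetical placeholders, the detector is assumed to emit one logit per sample, and byte sequences are assumed padded to a fixed length.

```python
# Hypothetical adversarial-retraining loop: augment the detector's
# training batches with generated malware variants labeled malicious.
# All names here are assumed placeholders, not artifacts from the paper.
import torch
import torch.nn as nn


def harden(detector: nn.Module,
           train_loader,                       # yields (bytes_batch, labels), 1 = malware
           generated_variants: torch.Tensor,   # fixed-length byte tensors from a generator
           epochs: int = 3,
           lr: float = 1e-4) -> nn.Module:
    opt = torch.optim.Adam(detector.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for bytes_batch, labels in train_loader:
            # Mix in a slice of generated variants, all labeled malicious,
            # so the detector also learns to flag evasive byte patterns.
            idx = torch.randint(0, len(generated_variants), (bytes_batch.size(0) // 2,))
            adv = generated_variants[idx]
            x = torch.cat([bytes_batch, adv], dim=0)
            y = torch.cat([labels.float(), torch.ones(adv.size(0))], dim=0)
            opt.zero_grad()
            loss = loss_fn(detector(x).squeeze(-1), y)
            loss.backward()
            opt.step()
    return detector
```

The design intuition follows the abstract's closing claim: by folding AI-generated variants into training before deployment, robustness is built in proactively rather than patched in reactively.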