
A Survey on Adversarial Machine Learning for Cyberspace Defense

Yu Zheng-Fei, Yan Qiao, Zhou Yun

Citation: Yu Zheng-Fei, Yan Qiao, Zhou Yun. A survey on adversarial machine learning for cyberspace defense. Acta Automatica Sinica, 2021, 47(x): 1−25. doi: 10.16383/j.aas.c210089

doi: 10.16383/j.aas.c210089
Funds: Supported by National Natural Science Foundation of China (61976142) and Training Program for Excellent Young Innovators of Changsha (KQ2009009)

Author Bio:

  YU Zheng-Fei  Ph.D. candidate at the College of Systems Engineering, National University of Defense Technology. His research interest covers adversarial machine learning and network security. E-mail: yuzhengfei19@nudt.edu.cn

  YAN Qiao  Professor at the College of Computer Science and Software Engineering, Shenzhen University. Her research interest covers network security and artificial intelligence. Corresponding author of this paper. E-mail: yanq@szu.edu.cn

  ZHOU Yun  Associate professor at the College of Systems Engineering, National University of Defense Technology. His research interest covers machine learning and probabilistic graphical models. Corresponding author of this paper. E-mail: zhouyun@nudt.edu.cn



Abstract: With its strong adaptivity and self-learning ability, machine learning has become a research hotspot and an important direction for cyberspace defense. However, machine learning models deployed in the cyberspace environment face the potential risk of adversarial attacks; they may become the weakest link in the defense system and thereby endanger the security of the whole system. Therefore, scientifically analyzing the security problem scenarios and exploring the feasibility and security of algorithms from the perspective of their operating mechanisms is of great benefit when machine learning models are used to build cyberspace defense systems. This paper comprehensively surveys the achievements of adversarial machine learning, an interdisciplinary research field, in cyberspace defense, as well as its future development directions. First, background knowledge on cyberspace defense and adversarial machine learning is introduced. Second, for the attacks that machine learning may suffer in cyberspace defense, the concept of the machine learning adversary model is introduced, with the aim of scientifically evaluating the security properties of a model under specific threat scenarios. Then, for machine learning algorithms used in cyberspace defense, methods for launching evasion attacks at the test stage, poisoning attacks at the training stage, and privacy-stealing attacks across all stages of machine learning are discussed in turn, followed by a study of how to strengthen the defenses of machine learning models in adversarial cyberspace environments. Finally, future directions and open challenges of adversarial machine learning research in cyberspace defense are discussed.
Footnotes:
1) http://mls-nips07.first.fraunhofer.de/
2) https://aisec.cc/
3) https://www.kdd.org/kdd2014/program.html
4) https://www.aaai.org/Workshops/ws16workshops.php#ws03
5) https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack
6) https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge-robust-model-track
7) https://sites.google.com/view/advml
8) https://tianchi.aliyun.com/competition/entrance/231745/introduction
9) https://s.alibaba.com/conference
10) https://mlhat.org/
11) http://federated-learning.org/rseml2021/
12) http://pdfrate.com/
13) http://contagiodump.blogspot.de/2010/08/malicious-documents-archive-for.html
14) BFGS is a quasi-Newton method whose key idea is to use the BFGS matrix as the symmetric positive-definite iteration matrix of the quasi-Newton update. It was proposed around 1970 by C. G. Broyden, R. Fletcher, D. Goldfarb, and D. F. Shanno, and is named after the initials of their surnames.
15) The C&W algorithm is named after its authors, Carlini and Wagner.
16) https://www.virustotal.com/gui/home/upload
17) The herring is a common food fish. Smoked herring turns red and acquires a strong smell; smoked "red herring" was placed where foxes roamed to test the tracking ability of hounds. Hence, "red herring" also denotes something that diverts attention.
18) The boiling-frog attack takes its name from the "boiling frog" experiments carried out by Cornell University scientists at the end of the 19th century; its main principle is to achieve the goal of a poisoning attack through many small attacks.
19) http://www.mlsec.org/malheur/
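For readers unfamiliar with the method mentioned in footnote 14, the standard BFGS quasi-Newton update can be written as follows (a textbook form added here for reference, not a formula from this paper): with step $s_k = x_{k+1} - x_k$ and gradient change $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$, the symmetric positive-definite Hessian approximation $B_k$ is updated by

$$ B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}, $$

and the next search direction $d_{k+1}$ is obtained by solving $B_{k+1} d_{k+1} = -\nabla f(x_{k+1})$.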
Fig. 1  Obfuscated code restored to the original code after decoding[59]

Fig. 2  Adversarial attacks and defense measures in cyberspace defense

Fig. 3  Mimicry attacks (top) and reverse mimicry attacks (bottom)[34]

Fig. 4  The original PDF file (left) and the modified PDF file (right)

Fig. 5  The cross-model transferability matrix[110]

Fig. 6  A framework for malware evasion based on reinforcement learning

Fig. 7  Illustration of a poisoning attack against centroid anomaly detection

Fig. 8  Bridge-based attacks against single-linkage hierarchical clustering

Fig. 9  Model extraction attacks

Fig. 10  Shadow models for membership inference attacks

Fig. 11  Illustration of SISA training[132]

Fig. 12  Adversarial attacks on graph neural networks[162]

Table 1  Related surveys about adversarial machine learning

Category | Reference | Main content | Year
Machine learning models | SoK: Security and privacy in machine learning[16] | Analyzes the attack surface of machine learning models and systematically reviews the attacks they may suffer during training and inference, together with defense measures. | 2018
Machine learning models | Wild patterns: Ten years after the rise of adversarial machine learning[8] | Systematically traces the evolution of adversarial machine learning, covering computer vision, network security, and related fields. | 2018
Machine learning models | A survey on security threats and defensive techniques of machine learning: A data driven view[13] | Discusses adversarial attacks on and defenses of machine learning from a data-driven perspective. | 2018
Machine learning models | The security of machine learning in an adversarial setting: A survey[14] | Reviews the attacks machine learning suffers during training and inference/testing in adversarial settings, and proposes corresponding security evaluation mechanisms and defense strategies. | 2019
Machine learning models | A taxonomy and survey of attacks against machine learning[15] | Reviews adversarial attacks on machine learning across application domains, mainly intrusion detection, spam filtering, and visual detection. | 2019
Machine learning models | Security and privacy of machine learning models: A survey[17] | Systematically summarizes existing attack and defense research from the three perspectives of data security, model security, and model privacy. | 2021
Machine learning models | Progress and future challenges of security attacks and defense mechanisms in machine learning[12] | Classifies security and privacy attacks on machine learning by where and when they occur, and introduces existing attack methods and defense mechanisms. | 2021
Deep learning models | Survey of attacks and defenses on edge-deployed neural networks[19] | Reviews attacks on and defenses of edge-deployed neural networks. | 2019
Deep learning models | Adversarial examples in modern machine learning: A review[20] | Reviews adversarial example generation and defense techniques. | 2019
Deep learning models | A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability[21] | Reviews the safety and interpretability of deep neural networks. | 2020
Deep learning models | Survey on generating adversarial examples[22] | Surveys adversarial examples across three stages: prehistory, origin, and development. | 2020
Machine learning privacy | Survey on privacy-preserving machine learning[18] | Focuses on privacy-preserving techniques for machine learning. | 2020
Machine learning privacy | A survey of privacy attacks in machine learning[23] | Reviews privacy attacks and protection techniques in machine learning. | 2020
Machine learning privacy | Survey on privacy preserving techniques for machine learning[24] | Focuses on privacy-preserving techniques for machine learning. | 2020
Computer vision | Threat of adversarial attacks on deep learning in computer vision: A survey[25] | Reviews attacks on and defenses of deep learning models in computer vision. | 2018
Computer vision | Adversarial machine learning in image classification: A survey towards the defender's perspective[26] | Studies adversarial machine learning in computer vision classification from the defender's perspective. | 2020
Computer vision | Adversarial examples on object recognition: A comprehensive survey[27] | Reviews attacks and defenses around adversarial examples when neural networks are applied to vision. | 2020
Computer vision | Adversarial attacks on deep learning models of computer vision: A survey[28] | Reviews adversarial attacks on deep learning models in computer vision. | 2020
Natural language processing | Adversarial attacks on deep-learning models in natural language processing[29] | Reviews adversarial attacks and defenses for deep learning models in natural language processing. | 2020
Biomedical domain | Adversarial biometric recognition: A review on biometric system security from the adversarial machine-learning perspective[30] | First review of biometric system security from the adversarial machine learning perspective. | 2015
Biomedical domain | Toward an understanding of adversarial examples in clinical trials[31] | Discusses adversarial examples in clinical trials based on deep learning models. | 2018
Biomedical domain | Secure and robust machine learning for healthcare: A survey[32] | From the adversarial machine learning perspective, overviews the status, challenges, and solutions of machine learning applications in healthcare. | 2021
Cyberspace defense | Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues[33] | Reviews adversarial attacks against intrusion detection systems and countermeasures. | 2013
Cyberspace defense | Towards adversarial malware detection: Lessons learned from PDF-based attacks[34] | Reviews adversarial attacks that machine-learning-based malicious Portable Document Format (PDF) detection systems may suffer. | 2019

Table 2  A timeline of adversarial machine learning history

Year | Main content
2004 | Dalvi et al.[43], and later Lowd and Meek[44, 45], studied the adversarial problem in spam detection and showed that linear classifiers can be fooled by carefully crafted adversarial examples.
2006 | Barreno et al.[9] questioned, from a broader perspective, the applicability of machine learning models in adversarial environments and proposed feasible measures to eliminate or mitigate these threats.
2007 | NeurIPS held the workshop Machine Learning in Adversarial Environments for Computer Security; in 2010, the journal Machine Learning published a special issue of the same name for the workshop[48].
2008 | CCS held the first Workshop on Artificial Intelligence and Security (AISec), which has been held continuously through 2020.
2012 | The Dagstuhl Perspectives Workshop on Machine Learning Methods for Computer Security discussed the challenges facing adversarial learning and learning-based security techniques, as well as future research directions[49].
2014 | SIGKDD held a special session on security and privacy.
2016 | AAAI held the workshop Artificial Intelligence for Cyber Security (AICS), held annually thereafter until 2019.
2017 | To promote research on adversarial examples, Google Brain organized the adversarial attack and defense competition at NeurIPS 2017.
2018 | NeurIPS 2018 held the Adversarial Vision Challenge, aiming to promote more robust machine vision models and more broadly applicable adversarial attacks.
2018 | Yevgeniy et al.[7] wrote the book Adversarial Machine Learning, published by Morgan & Claypool.
2019 | Joseph et al.[6] wrote the book Adversarial Machine Learning, published by Cambridge University Press.
2019 | The paper "Adversarial attacks on medical machine learning"[50] was published in Science, pointing out new vulnerabilities in medical machine learning that call for new measures.
2019 | The paper "Why deep-learning AIs are so easy to fool"[51] was published in Nature, discussing the robustness of deep learning under adversarial attacks.
2019 | KDD 2019 held the first workshop on adversarial learning methods for machine learning and data mining, which has now been held for two consecutive years.
2019 | Tsinghua University and Alibaba Security jointly launched the "Security AI Challenger Program" on the Tianchi competition platform, which has run for five rounds so far; an "AI and Security Workshop" is also held at the end of each year and has been held twice.
2020 | KDD 2020 held the first Workshop on Deployable Machine Learning for Security Defense.
2021 | AAAI 2021 held the international workshop Towards Robust, Secure and Efficient Machine Learning.
* Note: Data in the table are current as of February 8, 2021.

Table 3  Classification of attacks against machine learning based on threat modeling

Adversary capability | Adversary goal: model integrity | Adversary goal: model availability | Adversary goal: privacy theft | Adversary knowledge
Test data | Evasion attack | | Model extraction, model inversion, membership inference | White-box attack, black-box attack
Training data | Poisoning attack (backdoor attack) | Poisoning attack (boiling-frog attack) | Model inversion, membership inference | White-box attack, black-box attack
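The threat-model dimensions of Table 3 can also be restated as a small data structure, which is one way to organize a security evaluation of a model under a specific threat scenario. The sketch below is illustrative only; the enum and class names mirror the table rather than any existing library.

import numpy as np  # not required here; kept out intentionally
from dataclasses import dataclass
from enum import Enum

class Goal(Enum):
    MODEL_INTEGRITY = "model integrity"
    MODEL_AVAILABILITY = "model availability"
    PRIVACY_THEFT = "privacy theft"

class Knowledge(Enum):
    WHITE_BOX = "white-box attack"
    BLACK_BOX = "black-box attack"

class Capability(Enum):
    TEST_DATA = "manipulate test data"
    TRAINING_DATA = "manipulate training data"

@dataclass
class ThreatModel:
    goal: Goal
    knowledge: Knowledge
    capability: Capability

# Example: a black-box evasion attack at test time corresponds to
evasion = ThreatModel(Goal.MODEL_INTEGRITY, Knowledge.BLACK_BOX, Capability.TEST_DATA)
print(evasion)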

Table 4  Typical adversarial attacks for cyberspace defense

Attack class | Attack method | References | Application domain | Characteristics
Evasion attack | Mimicry-based evasion | [43, 45, 63-65] | Spam detection | Mimicry attacks use heuristic algorithms that try to add benign features to malicious files, or inject malicious features into benign files, so as to evade detection.
Evasion attack | Mimicry-based evasion | [66] | Traffic analysis |
Evasion attack | Mimicry-based evasion | [67] | Malware detection |
Evasion attack | Mimicry-based evasion | [68-75] | Malicious PDF file classification |
Evasion attack | Gradient-based evasion | [76, 77] | Malicious PDF file classification | Gradient-based evasion uses gradient descent to solve an optimization problem, applying fine-grained modifications to input samples to minimize (maximize) the probability of being classified as malicious (benign).
Evasion attack | Gradient-based evasion | [10, 78, 79] | Malware detection |
Evasion attack | Gradient-based evasion | [80, 81] | Intrusion detection |
Evasion attack | Transfer-based evasion | [70, 82] | Malicious PDF file classification | Transfer-based evasion mainly exploits the cross-model transferability of adversarial examples and applies to attack scenarios where the model gradient is unavailable.
Evasion attack | Transfer-based evasion | [83-85] | Intrusion detection |
Evasion attack | Transfer-based evasion | [86] | XSS detection |
Evasion attack | Transfer-based evasion | [87] | Domain name generation |
Evasion attack | Transfer-based evasion | [88-90] | Malware detection |
Poisoning attack | Availability attack | [9, 45, 91-93] | Spam detection | Availability attacks aim to increase the classification error at the test stage, causing denial of service.
Poisoning attack | Availability attack | [94, 95] | Intrusion detection |
Poisoning attack | Integrity attack | [96, 97] | Anomaly detection | Integrity attacks aim to make a specific subset of malware be misclassified by the model.
Poisoning attack | Integrity attack | [98, 99] | Malware detection |
Privacy theft | Model extraction attack | [100] | | Privacy theft mainly aims to steal information about the machine learning model or its training data.
Privacy theft | Model inversion attack | [101, 102] | |
Privacy theft | Membership inference attack | [103, 104] | |
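To illustrate the gradient-based evasion row of Table 4, the sketch below performs one gradient step that lowers a toy logistic-regression detector's score for a sample. It is a minimal illustration with invented weights and features, not code from the cited attacks; real attacks additionally constrain which features may change so that the malicious sample remains functional.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def evasion_step(x, w, b, step_size=0.1):
    """One gradient-descent step that lowers P(malicious | x) for a logistic detector."""
    p = sigmoid(w @ x + b)            # current probability of the "malicious" class
    grad = p * (1.0 - p) * w          # gradient of that probability w.r.t. the input
    x_adv = x - step_size * grad      # move against the gradient to reduce the score
    return np.clip(x_adv, 0.0, 1.0)   # keep features inside a valid range

# Hypothetical toy detector and sample (weights and features invented for illustration).
w, b = np.array([2.0, -1.0, 3.0, 0.5]), -0.5
x = np.array([1.0, 0.0, 1.0, 1.0])
print(sigmoid(w @ x + b), sigmoid(w @ evasion_step(x, w, b) + b))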

Table 5  Typical defenses against adversarial attacks for cyberspace defense

Defense class | Defense measure | References | Application scenario | Brief description
Evasion defense | Dimensionality reduction | [118, 119] | Spam detection | Can effectively defend against adversarial attacks, but the model's accuracy on normal samples may drop.
Evasion defense | Dimensionality reduction | [118, 120] | Malware detection |
Evasion defense | Robust optimization | [121-124] | Malware detection | The basic idea is that the model has "blind spots" during training; crafted adversarial examples are injected into the training set to improve the model's generalization ability.
Evasion defense | Defensive distillation | [125, 126] | Malware detection | Has difficulty defending against the C&W attack.
Poisoning defense | Data sanitization | [127] | Anomaly detection | Treats poisoning attacks as outliers.
Poisoning defense | Data sanitization | [128-132] | |
Poisoning defense | Game theory | [133-137] | Spam detection | Applies game-theoretic ideas to handle poisoning attacks on spam filters.
Privacy protection | Differential privacy | [138-142] | | The difficulty lies in balancing model utility and the effectiveness of privacy protection.
Privacy protection | Model compression | [110] | | Can be used to mitigate membership inference attacks.
Privacy protection | Model ensemble | [143] | | The main idea is to set loss gradients in the model below a specific threshold to zero; can be used to defend against model extraction attacks.
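To make the robust-optimization row of Table 5 concrete, the sketch below implements adversarial training for a toy logistic-regression detector: at each epoch the training batch is perturbed against the current model and the perturbed copies are added to the training data. The data, model, and FGSM-style perturbation are invented for illustration and are not the procedures of references [121-124].

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_like(X, y, w, b, eps=0.1):
    """Perturb each sample in the direction that increases the loss (labels y in {0, 1})."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)                    # gradient of cross-entropy w.r.t. inputs
    return np.clip(X + eps * np.sign(grad_x), 0.0, 1.0)

def adversarial_training(X, y, epochs=50, lr=0.1, eps=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        X_adv = fgsm_like(X, y, w, b, eps)         # craft adversarial copies of the batch
        X_all = np.vstack([X, X_adv])              # mix clean and adversarial data
        y_all = np.concatenate([y, y])
        p = sigmoid(X_all @ w + b)
        w -= lr * X_all.T @ (p - y_all) / len(y_all)  # standard logistic-regression update
        b -= lr * np.mean(p - y_all)
    return w, b

# Hypothetical toy data: two features, labels 0/1 (made up for illustration).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array([1, 1, 0, 0])
print(adversarial_training(X, y)[0])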
                    • [1] 搜狐. 美國東海岸斷網事件主角Dyn關于DDoS攻擊的后果. [Online], available: https://www.sohu.com/a/117078005_257305, October 25, 2016
                      [2] 搜狐. WannaCry勒索病毒事件分析. [Online], available: https://www.sohu.com/a/140863167_244641, May 15, 2017
                      [3] 彭志藝, 張衠, 惠志斌, 覃慶玲. 中國網絡空間安全發展報告(2019版). 北京: 社會科學文獻出版社, 2019

                      Peng Zhi-Yi, Zhang Zhun, Hui Zhi-Bin, Tan Qing-Ling. Annual Report on the Development of Cyberspace Security in China(2019). Beijing: Social Sciences Academic Press, 2019
                      [4] 張蕾, 崔勇, 劉靜, 江勇, 吳建平. 機器學習在網絡空間安全研究中的應用. 計算機學報, 2018, 41 (9): 1943-1975 doi: 10.11897/SP.J.1016.2018.01943

                      Zhang Lei, Cui Yong, Liu Jing, Jiang Yong, Wu Jian-Ping. Application of machine learning in cyberspace security research. Chinese Journal of Computers, 2018, 41(9): 1943-1975 doi: 10.11897/SP.J.1016.2018.01943
                      [5] 中共中央網絡安全和信息化委員會辦公室. 將“關口前移”要求落到實處. [Online], available: http://www.cac.gov.cn/2018-04/30/c_1122765347.htm, April 30, 2018
                      [6] Joseph A D, Nelson B, Rubinstein B I P, Tygar J D. Adversarial machine learning. Cambridge: Cambridge University Press, 2019.
                      [7] Yevgeniy V, Murat K. Adversarial machine learning. San Rafael: Morgan & Claypool Publishers, 2018.
                      [8] Biggio B, Roli F. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 2018, 84: 317-331 doi: 10.1016/j.patcog.2018.07.023
                      [9] Barreno M, Nelson B, Sears R, Joseph A D, Tygar J D. Can machine learning be secure? In: Proceedings of the 2006 ACM Symposium on Information, computer and communications security. Taipei, Taiwan, China: ACM, 2006. 16–25
                      [10] Grosse K, Papernot N, Manoharan P, Backes M, Mcdaniel P. Adversarial examples for malware detection. In: Proceedings of the 22nd European Symposium on Research in Computer Security. Oslo, Norway: Springer, 2017. 62−79
                      [11] Biggio B, Fumera G, Roli F. Pattern recognition systems under attack: Design issues and research challenges. International Journal of Pattern Recognition and Artificial Intelligence, 2014, 28(7): Article No. 1460002 doi: 10.1142/S0218001414600027
                      [12] 李欣姣, 吳國偉, 姚琳, 張偉哲, 張賓. 機器學習安全攻擊與防御機制研究進展和未來挑戰. 軟件學報, 2021, 32(2): 406-423

                      Li Xin-Jiao, Wu Guo-Wei, Yao Lin, Zhang Wei-Zhe, Zhang Bin. Progress and future challenges of security attacks and defense mechanisms in machine learning. Journal of Software, 2021, 32(2): 406?423
                      [13] Liu Q, Li P, Zhao W, Cai W, Yu S, Leung V C M. A survey on security threats and defensive techniques of machine learning: A data driven view. IEEE Access, 2018, 6: 12103-12117 doi: 10.1109/ACCESS.2018.2805680
                      [14] Wang X, Li J, Kuang X, Tan Y-A. The security of machine learning in an adversarial setting: A survey. Journal of Parallel Distributed Computing, 2019, 130: 12-23 doi: 10.1016/j.jpdc.2019.03.003
                      [15] Pitropakis N, Panaousis E, Giannetsos T, Anastasiadis E, Loukas G. A taxonomy and survey of attacks against machine learning. Computer Science Review, 2019, 34: Article No. 100199 doi: 10.1016/j.cosrev.2019.100199
                      [16] Papernot N, Mcdaniel P, Sinha A, Wellman M P. Sok: Security and privacy in machine learning. In: Proceedings of the 3rd IEEE European Symposium on Security and Privacy. London, UK: IEEE, 2018. 399?414
                      [17] 紀守領, 杜天宇, 李進鋒, 沈超, 李博. 機器學習模型安全與隱私研究綜述. 軟件學報, 2021, 32(1): 41-67

                      Ji Shou-Ling, Du Tian-Yu, Li Jin-Feng, Shen Chao, Li Bo. Security and privacy of machine learning models: A survey. Journal of Software, 2021, 32(1): 41-67
                      [18] 劉俊旭, 孟小峰. 機器學習的隱私保護研究綜述. 計算機研究與發展, 2020, 57(2): 346-362 doi: 10.7544/issn1000-1239.2020.20190455

                      Liu Jun-Xu, Meng Xiao-Feng. Survey on privacy-preserving machine learning. Journal of Computer Research and Development, 2020, 57(2): 346-362. doi: 10.7544/issn1000-1239.2020.20190455
                      [19] Isakov M, Gadepally V, Gettings K, Kinsy M. Survey of attacks and defenses on edge-deployed neural networks. In: Proceedings of the 2019 IEEE High Performance Extreme Computing Conference. Waltham, MA, USA: IEEE, 2019. 1?8
                      [20] Wiyatno R, Xu A, Dia O, Berker A D. Adversarial examples in modern machine learning: A review. ArXiv: 1911.05268, 2019.
                      [21] Huang X, Kroening D, Ruan W, Sun Y, Thamo E, Wu M, et al. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 2020, 37: 100270 doi: 10.1016/j.cosrev.2020.100270
                      [22] 潘文雯, 王新宇, 宋明黎, 陳純. 對抗樣本生成技術綜述. 軟件學報, 2020, 31(1): 67-81

                      Pan Wen-Wen, Wang Xin-Yu, Song Ming-Li, Chen Chun. Survey on generating adversarial examples. Journal of Software, 2020, 31(1): 67-81
                      [23] Rigaki M, García S. A survey of privacy attacks in machine learning. ArXiv: 2007.07646, 2020.
                      [24] 譚作文, 張連福. 機器學習隱私保護研究綜述. 軟件學報, 2020, 31(7): 2127-2156

                      Tan Zuo-Wen, Zhang Lian-Fu. Survey on privacy preserving techniques for machine learning. Journal of Software, 2020, 31(7): 2127-2156
                      [25] Akhtar N, Mian A. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 2018, 6: 14410-14430 doi: 10.1109/ACCESS.2018.2807385
                      [26] Machado G R, Silva E, Goldschmidt R R. Adversarial machine learning in image classification: A survey towards the defender's perspective. ArXiv: 2009.03728, 2020.
                      [27] Serban A, Poll E, Visser J. Adversarial examples on object recognition: A comprehensive survey. ACM Computing Surveys, 2020, 53(3): Article No. 66
                      [28] Ding J, Xu Z. Adversarial attacks on deep learning models of computer vision: A survey. In: Proceedings of the 20th International Conference on Algorithms and Architectures for Parallel Processing. New York, NY, USA: Springer, 2020. 396?408
                      [29] Zhang W, Sheng Q Z, Alhazmi A, Li C. Adversarial attacks on deep-learning models in natural language processing. ACM Transactions on Intelligent Systems and Technology, 2020, 11(3): 1-41
                      [30] Biggio B, Fumera G, Russu P, Didaci L, Roli F. Adversarial biometric recognition: A review on biometric system security from the adversarial machine-learning perspective. IEEE Signal Processing Magazine, 2015, 32(5): 31-41 doi: 10.1109/MSP.2015.2426728
                      [31] Papangelou K, Sechidis K, Weatherall J, Brown G. Toward an understanding of adversarial examples in clinical trials. In: Proceedings of the 2018 European Conference on Machine Learning and Knowledge Discovery in Databases. Dublin, Ireland: Springer, 2018. 35?51
                      [32] Qayyum A, Qadir J, Bilal M, Al-Fuqaha A. Secure and robust machine learning for healthcare: A survey. IEEE Reviews in Biomedical Engineering, 2021, 14: 156-180 doi: 10.1109/RBME.2020.3013489
                      [33] Corona I, Giacinto G, Roli F. Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues. Information Sciences, 2013, 239: 201-225 doi: 10.1016/j.ins.2013.03.022
                      [34] Maiorca D, Biggio B, Giacinto G. Towards adversarial malware detection: Lessons learned from PDF-based attacks. ACM Computing Surveys, 2019, 52(4): Article No. 78
                      [35] Army U S G U. Joint publication 3-12: Cyberspace operations. North Charleston, SC, USA: Create Space Independent Publishing Platform, 2018.
[36] Gibson W. Neuromancer. New York: Ace Books, 1984.
                      [37] The White House. Defending America’s cyberspace: National plan for information systems protection. NCJ Number 189910, US Executive Office of the President, Washington, USA, 2000
                      [38] 中華人民共和國國家互聯網信息辦公室. 國家網絡空間安全戰略. [Online], available: http://www.cac.gov.cn/2016-12/27/c_1120195926.htm, December 27, 2016
                      [39] National Institute of Standards and Technology. Framework for improving critical infrastructure cybersecurity version 1.1. [Online], available: https://www.nist.gov/publications/framework-improving-critical-infrastructure-cybersecurity-version-11, April 16, 2018
                      [40] Turing A M. Computing machinery and intelligence. Mind, 1950, 59(236): 433-460
                      [41] Samuel A L. Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 1959, 3(3): 211-229
                      [42] Mohri M, Rostamizadeh A, Talwalkar A. Foundations of machine learning. London: MIT Press, 2012.
                      [43] Dalvi N, Domingos P, Sumit M, Verma S D. Adversarial classification. In: Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Seattle, WA, USA: ACM, 2004. 99?108
                      [44] Lowd D, Meek C. Adversarial learning. In: Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Chicago, IL, USA: ACM, 2005. 641?647
                      [45] Lowd D, Meek C. Good word attacks on statistical spam filters. In: The 2nd Conference on Email and Anti-Spam. Stanford, CA, USA: 2005.
                      [46] Barreno M, Nelson B, Joseph A D, Tygar J D. The security of machine learning. Machine Learning, 2010, 81(2): 121-148 doi: 10.1007/s10994-010-5188-5
                      [47] Dasgupta P, Collins J B. A survey of game theoretic approaches for adversarial machine learning in cybersecurity tasks. AI Magazine, 2019, 40(2): 31-43 doi: 10.1609/aimag.v40i2.2847
                      [48] Laskov P, Lippmann R. Machine learning in adversarial environments. Machine Learning, 2010, 81(2): 115-119 doi: 10.1007/s10994-010-5207-6
                      [49] Joseph A, Laskov P, Roli F, Tygar J, Nelson B. Machine learning methods for computer security. Dagstuhl Reports, 2012, 2: 109-130
                      [50] Finlayson S G, Bowers J D, Ito J, Zittrain J L, Beam A L, Kohane I S. Adversarial attacks on medical machine learning. Science, 2019, 363(6433): 1287-1289 doi: 10.1126/science.aaw4399
[51] Heaven D. Why deep-learning AIs are so easy to fool. Nature, 2019, 574: 163-166 doi: 10.1038/d41586-019-03013-5
                      [52] Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, et al. Intriguing properties of neural networks. In: The 2nd International Conference on Learning Representations. Banff, AB, Canada: 2014.
                      [53] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. In: The 3rd International Conference on Learning Representations. San Diego, CA, USA: 2015.
                      [54] Li X, Li F. Adversarial examples detection in deep networks with convolutional filter statistics. In: Proceedings of the 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 5775?5783
                      [55] Lu J, Issaranon T, Forsyth D. Safetynet: Detecting and rejecting adversarial examples robustly. In: Proceedings of the 16th IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 446?454
                      [56] Meng D, Chen H. MagNet: A two-pronged defense against adversarial examples. In: Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. Dallas, TX, USA: ACM, 2017. 135?147
                      [57] Melis M, Demontis A, Biggio B, Brown G, Fumera G, Roli F. Is deep learning safe for robot vision? Adversarial examples against the iCub humanoid. In: Proceedings of the 16th IEEE International Conference on Computer Vision Workshops. Venice, Italy: IEEE, 2017. 751?759
                      [58] Papernot N, Mcdaniel P, Wu X, Jha S, Swami A. Distillation as a defense to adversarial perturbations against deep neural networks. In: Proceedings of the 2016 IEEE Symposium on Security and Privacy. San Jose, CA, USA: IEEE, 2016. 582?597
                      [59] 程琪芩, 萬良. BiLSTM在跨站腳本檢測中的應用研究. 計算機科學與探索, 2020, 14(8): 1338-1347 doi: 10.3778/j.issn.1673-9418.1909035

                      Cheng Qi-Qian, Wan Liang. Application research of BiLSTM in cross-site scripting detection. Journal of Frontiers of Computer Science and Technology, 2020, 14(8): 1338-1347 doi: 10.3778/j.issn.1673-9418.1909035
                      [60] Biggio B, Fumera G, Roli F. Security evaluation of pattern classifiers under attack. IEEE Transactions on Knowledge and Data Engineering, 2014, 26: 984-996 doi: 10.1109/TKDE.2013.57
                      [61] Kerckhoffs A. La cryptographie militaire. Journal des Sciences Militaires, 1883, 9: 5-83
                      [62] 范蒼寧, 劉鵬, 肖婷, 趙巍, 唐降龍. 深度域適應綜述: 一般情況與復雜情況. 自動化學報, 2021, 47(3): 515-548

                      Fan Cang-Ning, Liu Peng, Xiao Ting, Zhao Wei, Tang Xiang-Long. A review of deep domain adaptation: General situation and complex situation. Acta Automatica Sinica, 2021, 47(3): 515?548
                      [63] Wittel G L, Wu S F. On attacking statistical spam filters. In: The 1st Conference on Email and Anti-spam. Mountain View, CA, USA: 2004. 1?7
                      [64] Liu C, Stamm S. Fighting unicode-obfuscated spam. In: Proceedings of the Anti-Phishing Working Groups 2nd Annual eCrime Researchers Summit. Pittsburgh, PA, USA: ACM, 2007. 45?59
                      [65] Sculley D, Wachman G M, Brodley C E. Spam filtering using inexact string matching in explicit feature space with on-line linear classifiers. In: The 15th Text REtrieval Conference. Gaithersburg, MD, USA: 2006. 1?10
                      [66] Wright C V, Coull S E, Monrose F. Traffic morphing: An efficient defense against statistical traffic analysis. In: Proceedings of the 16th Annual Network and Distributed System Security Symposium. San Diego, CA, USA: ISOC, 2009. 237–250
                      [67] Rosenberg I, Shabtai A, Rokach L, Elovici Y. Generic black-box end-to-end attack against state of the art API call based malware classifiers. In: The 21st International Symposium on Research in Attacks, Intrusions and Defenses. Heraklion, Greece: 2018. 490?510
                      [68] Smutz C, Stavrou A. Malicious PDF detection using metadata and structural features. In: Proceedings of the 28th Annual Computer Security Applications Conference. Orlando, Florida, USA: ACM, 2012. 239–248
[69] Šrndić N, Laskov P. Detection of malicious PDF files based on hierarchical document structure. In: Proceedings of the 20th Annual Network and Distributed System Security Symposium. San Diego, CA, USA: ISOC, 2013. 1−16
[70] Šrndić N, Laskov P. Practical evasion of a learning-based classifier: A case study. In: Proceedings of the 35th IEEE Symposium on Security and Privacy. San Jose, CA, USA: IEEE, 2014. 197−211
                      [71] Suciu O, Coull S E, Johns J. Exploring adversarial examples in malware detection. In: Proceedings of the 2019 IEEE Security and Privacy Workshops. San Francisco, CA, USA: IEEE, 2019. 8?14
                      [72] Corona I, Maiorca D, Ariu D, Giacinto G. Lux0R: Detection of malicious PDF-embedded javascript code through discriminant analysis of API references. In: Proceedings of the 2014 ACM Artificial Intelligent and Security Workshop. Scottsdale, AZ, USA: ACM, 2014. 47?57
                      [73] Maiorca D, Corona I, Giacinto G. Looking at the bag is not enough to find the bomb: An evasion of structural methods for malicious PDF files detection. In: Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security. Hangzhou, China: ACM, 2013. 119–130
                      [74] Xu W, Qi Y, Evans D. Automatically evading classifiers: A case study on PDF malware classifiers. In: Proceedings of the 23rd Annual Network and Distributed System Security Symposium. San Diego, CA, USA: ISOC, 2016. 1?15
                      [75] Biggio B, Corona I, Maiorca D, Nelson B, Srndic N, Laskov P, et al. Evasion attacks against machine learning at test time. In: Proceedings of the 2013 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. Prague, Czech: Springer, 2013. 387?402
                      [76] Smutz C, Stavrou A. When a tree falls: Using diversity in ensemble classifiers to identify evasion in malware detectors. In: Proceedings of the 23rd Annual Network and Distributed System Security Symposium. San Diego, CA, USA: ISOC, 2016. 1?15
[77] Biggio B, Corona I, Nelson B, Rubinstein B I P, Maiorca D, Fumera G, et al. Security evaluation of support vector machines in adversarial environments. In: Ma Y, Guo G (Eds.). Support Vector Machines Applications. Cham: Springer International Publishing, 2014. 105−153
                      [78] Kolosnjaji B, Demontis A, Biggio B, Maiorca D, Giacinto G, Eckert C, et al. Adversarial malware binaries: Evading deep learning for malware detection in executables. In: Proceedings of the 26th European Signal Processing Conference. Rome, Italy: EUSIPCO, 2018. 533?537
                      [79] Kreuk F, Barak A, Aviv-Reuven S, Baruch M, Pinkas B, Keshet J. Adversarial examples on discrete sequences for beating whole-binary malware detection. ArXiv: 1802.04528, 2018.
                      [80] Huang C-H, Lee T-H, Chang L-H, Lin J-R, Horng G. Adversarial attacks on SDN-based deep learning IDS system. In: Proceedings of the 2018 International Conference on Mobile and Wireless Technology. Kowloon, Hong kong: Springer, 2019. 181?191
                      [81] Clements J, Yang Y, Sharma A A, Hu H, Lao Y. Rallying adversarial techniques against deep learning for network security. ArXiv: 1903.11688, 2019.
                      [82] Dang H, Huang Y, Chang E-C. Evading classifiers by morphing in the dark. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. Dallas, TX, USA: ACM, 2017. 119?133
                      [83] Lin Z, Shi Y, Xue Z. IDSGAN: Generative adversarial networks for attack generation against intrusion detection. ArXiv: 1809.02077, 2018.
                      [84] Rigaki M, Garcia S. Bringing a GAN to a knife-fight: Adapting malware communication to avoid detection. In: Proceedings of the 2018 IEEE Symposium on Security and Privacy Workshops. San Francisco, CA, USA: IEEE, 2018. 70?75
                      [85] Yan Q, Wang M, Huang W, Luo X, Yu F R. Automatically synthesizing DoS attack traces using generative adversarial networks. International Journal of Machine Learning and Cybernetics, 2019, 10(12): 3387-3396 doi: 10.1007/s13042-019-00925-6
                      [86] Fang Y, Huang C, Xu Y, Li Y. RLXSS: Optimizing XSS detection model to defend against adversarial attacks based on reinforcement learning. Future Internet, 2019, 11: 177 doi: 10.3390/fi11080177
                      [87] Anderson H S, Woodbridge J, Filar B. DeepDGA: Adversarially-tuned domain generation and detection. In: Proceedings of the 9th ACM Workshop Artificial Intelligence and Security. Vienna, Austria: ACM, 2016. 13?21
                      [88] Hu W, Tan Y. Generating adversarial malware examples for black-box attacks based on GAN. ArXiv: 1702.05983, 2017.
                      [89] Anderson H S, Kharkar A, Filar B, Evans D, Roth P. Learning to evade static PE machine learning malware models via reinforcement learning. ArXiv: 1801.08917, 2018.
                      [90] 唐川, 張義, 楊岳湘, 施江勇. DroidGAN: 基于DCGAN的Android對抗樣本生成框架. 通信學報, 2018, 39(S1): 64-69

                      Tang Chuan, Zhang Yi, Yang Yue-Xiang, Shi Jiang-Yong. DroidGAN: Android adversarial sample generation framework based on DCGAN. Journal on Communications, 2018, 39(S1): 64-69
                      [91] Nelson B, Barreno M, Chi F J, Joseph A D, Rubinstein B I P, Saini U, et al. Exploiting machine learning to subvert your spam filter. In: Proceedings of the 1st USENIX Workshop on Large-Scale Exploits and Emergent Threats: Botnets, Spyware, Worms, and More. San Francisco, CA, USA: USENIX Association, 2008. 1?9
                      [92] Newsome J, Karp B, Song D X. Paragraph: Thwarting signature learning by training maliciously. In: Proceedings of the 9th International Symposium on Recent Advances in Intrusion Detection. Hamburg, Germany: Springer, 2006. 81?105
                      [93] Huang L, Joseph A D, Nelson B, Rubinstein B I P, Tygar J D. Adversarial machine learning. In: Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence. New York, NY, USA: ACM, 2011. 43–58
                      [94] Kim H A, Karp B, Usenix. Autograph: Toward automated, distributed worm signature detection. In: Proceedings of the 13th USENIX Security Symposium. San Diego, CA, USA: USENIX Association, 2004. 271?286
                      [95] Rubinstein B I P, Nelson B, Huang L, Joseph A D, Lau S-H, Rao S, et al. Antidote: Understanding and defending against poisoning of anomaly detectors. In: Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement. Chicago, IL, USA: ACM, 2009. 1?14
                      [96] Nelson B, Joseph A D. Bounding an attack's complexity for a simple learning model. In: Proceedings of the 1st USENIX Workshop on Tackling Computer Systems Problems with Machine Learning Techniques. Saint-Malo, France: USENIX, 2006. 1?5
                      [97] Kloft M, Laskov P. Online anomaly detection under adversarial impact. In: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics. Sardinia, Italy: Microtome, 2010. 405?412
                      [98] Biggio B, Pillai I, Rota Bulo S, Ariu D, Pelillo M, Roli F. Is data clustering in adversarial settings secure? In: Proceedings of the 6th Annual ACM Workshop on Artificial Intelligence and Security. Berlin, Germany: ACM, 2013. 87?97
                      [99] Biggio B, Rieck K, Ariu D, Wressnegger C, Corona I, Giacinto G, et al. Poisoning behavioral malware clustering. In: Proceedings of the 7th ACM Workshop Artificial Intelligence and Security. Scottsdale, AZ, USA: ACM, 2014. 27?36
                      [100] Tramèr F, Zhang F, Juels A, Reiter M K, Ristenpart T. Stealing machine learning models via prediction APIs. In: Proceedings of the 25th USENIX Security Symposium. Austin, TX, USA: USENIX Association, 2016. 601?618
                      [101] Fredrikson M, Jha S, Ristenpart T. Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. Denver, CO, USA: ACM, 2015. 1322?1333
                      [102] Papernot N, Mcdaniel P D, Goodfellow I J, Jha S, Celik Z B, Swami A. Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security. Abu Dhabi, UAE: ACM, 2017. 506?519
                      [103] Fredrikson M, Lantz E, Jha S, Lin S, Page D, Ristenpart T. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In: Proceedings of the 23rd USENIX Security Symposium. San Diego, CA, USA: USENIX Association, 2014. 17?32
                      [104] Shokri R, Stronati M, Song C, Shmatikov V. Membership inference attacks against machine learning models. In: Proceedings of the 2017 IEEE Symposium on Security and Privacy. San Jose, CA, USA: IEEE, 2017. 3?18
                      [105] Maiorca D, Giacinto G, Corona I. A pattern recognition system for malicious PDF files detection. In: Proceedings of the 8th International Conference on Machine Learning and Data Mining in Pattern Recognition. Berlin, Germany: Springer, 2012. 510?524
                      [106] Papernot N, Mcdaniel P D, Jha S, Fredrikson M, Celik Z B, Swami A. The limitations of deep learning in adversarial settings. In: Proceedings of the 2016 IEEE European Symposium on Security and Privacy. Saarbruecken, Germany: IEEE, 2016. 372?387
                      [107] Carlini N, Wagner D A. Towards evaluating the robustness of neural networks. In: Proceedings of the 2017 IEEE Symposium on Security and Privacy. San Jose, CA, USA: IEEE, 2017. 39?57
                      [108] Eykholt K, Evtimov I, Fernandes E, Li B, Rahmati A, Xiao C, et al. Robust physical-world attacks on deep learning visual classification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018. 1625?1634
                      [109] Chen P Y, Sharma Y, Zhang H, Yi J F, Hsieh C J. EAD: Elastic-net attacks to deep neural networks via adversarial examples. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans, LA, USA: AAAI, 2018. 10?17
                      [110] Papernot N, Mcdaniel P D, Goodfellow I J. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. ArXiv: 1605.07277, 2016.
                      [111] Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Proceedings of the 28th Annual Conference on Neural Information Processing Systems. Montreal, QC, Canada: MIT Press, 2014. 2672?2680
                      [112] 王坤峰, 茍超, 段艷杰, 林懿倫, 鄭心湖, 王飛躍. 生成式對抗網絡GAN的研究進展與展望. 自動化學報, 2017, 43(03): 321-332

                      Wang Kun-Feng, Gou Chao, Duan Yan-Jie, Lin Yi-Lun, Zheng Xin-Hu, Wang Fei-Yue. Generative adversarial networks: The state of the art and beyond. Acta Automatica Sinica, 2017, 43(3): 321-332
                      [113] Kearns M, Li M. Learning in the presence of malicious errors. In: Proceedings of the 20th annual ACM Symposium on Theory of Computing. Chicago, Illinois, USA: ACM, 1988. 267–280
                      [114] John Leyden. Kaspersky Lab denies tricking AV rivals into nuking harmless files. [Online], available: https://www.theregister.co.uk/2015/08/14/kasperskygate/, August 14 2015
                      [115] Kloft M, Laskov P. Security analysis of online centroid anomaly detection. Journal of Machine Learning Research, 2012, 13: 3681-3724
                      [116] Liao C, Zhong H, Squicciarini A C, Zhu S, Miller D J. Backdoor embedding in convolutional neural network models via invisible perturbation. In: Proceedings of the 10th ACM Conference on Data and Application Security and Privacy. New Orleans, LA, USA: ACM, 2020. 97–108
                      [117] Hayes J, Melis L, Danezis G, Cristofaro E D. LOGAN: Membership inference attacks against generative models. Proceedings on Privacy Enhancing Technologies, 2019, 2019(1): 133-152 doi: 10.2478/popets-2019-0008
                      [118] Zhang F, Chan P P K, Biggio B, Yeung D S, Roli F. Adversarial feature selection against evasion attacks. IEEE Transactions on Cybernetics, 2016, 46(3): 766-77 doi: 10.1109/TCYB.2015.2415032
                      [119] Bhagoji A N, Cullina D, Sitawarin C, Mittal P. Enhancing robustness of machine learning systems via data transformations. In: Proceedings of the 52nd Annual Conference on Information Sciences and Systems. Princeton, NJ, USA: IEEE, 2018. 1?5
                      [120] Wang Q, Guo W, Zhang K, Ororbia A G, Xing X, Liu X, et al. Adversary resistant deep neural networks with an application to malware detection. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Halifax, NS, Canada: ACM, 2017. 1145–1153
                      [121] Al-Dujaili A, Huang A, Hemberg E, O’reilly U. Adversarial deep learning for robust detection of binary encoded malware. In: Proceedings of the 2018 IEEE Symposium on Security and Privacy Workshops. San Francisco, CA, USA: IEEE, 2018. 76?82
                      [122] Demontis A, Melis M, Biggio B, Maiorca D, Arp D, Rieck K, et al. Yes, machine learning can be more secure! A case study on Android malware detection. IEEE Transactions on Dependable and Secure Computing, 2019, 16(4): 711-724 doi: 10.1109/TDSC.2017.2700270
                      [123] Yang W, Kong D, Xie T, Gunter C A. Malware detection in adversarial settings: Exploiting feature evolutions and confusions in Android apps. In: Proceedings of the 33rd Annual Computer Security Applications Conference. 2017.
                      [124] Li D, Li Q. Adversarial deep ensemble: Evasion attacks and defenses for malware detection. IEEE Transactions on Information Forensics and Security, 2020, 15: 3886-3900 doi: 10.1109/TIFS.2020.3003571
                      [125] Grosse K, Papernot N, Manoharan P, Backes M, Mcdaniel P D. Adversarial perturbations against deep neural networks for malware classification. ArXiv: 1606.04435, 2016.
                      [126] Stokes J W, Wang D, Marinescu M, Marino M, Bussone B. Attack and defense of dynamic analysis-based, adversarial neural malware detection models. In: Proceedings of the 2018 IEEE Military Communications Conference. Los Angeles, CA, USA: IEEE, 2018. 102?109
                      [127] Cretu G F, Stavrou A, Locasto M E, Stolfo S J, Keromytis A D. Casting out demons: Sanitizing training data for anomaly sensors. In: Proceedings of the 2008 IEEE Symposium on Security and Privacy. Oakland, CA, USA: IEEE, 2008. 81?95
                      [128] Laishram R, Phoha V V. Curie: A method for protecting SVM classifier from poisoning attack. ArXiv: 1606.01584, 2016.
                      [129] Feinman R, Curtin R R, Shintre S, Gardner A B. Detecting adversarial samples from artifacts. ArXiv: 1703.00410, 2017.
                      [130] Steinhardt J, Koh P W, Liang P. Certified defenses for data poisoning attacks. In: Proceedings of the 31st Annual Conference on Neural Information Processing Systems. Long Beach, CA, USA: MIT Press, 2017. 3518?3530
                      [131] Metzen J H, Genewein T, Fischer V, Bischoff B. On detecting adversarial perturbations. In: The 5th International Conference on Learning Representations. Toulon, France: 2017.
                      [132] Bourtoule L, Chandrasekaran V, Choquette-Choo C A, Jia H, Travers A, Zhang B, et al. Machine unlearning. In: The 42nd IEEE Symposium on Security and Privacy. Virtual conference: 2021. 1?19
                      [133] Brückner M, Scheffer T. Nash equilibria of static prediction games. In: Proceedings of the 23rd Annual Conference on Neural Information Processing Systems. Vancouver, BC, Canada: MIT Press, 2009. 171?179
                      [134] Brückner M, Scheffer T. Stackelberg games for adversarial prediction problems. In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Diego, CA, USA: ACM, 2011. 547?555
                      [135] Brückner M, Kanzow C, Scheffer T. Static prediction games for adversarial learning problems. Journal of Machine Learning Research, 2012, 13: 2617-2654
                      [136] Sengupta S, Chakraborti T, Kambhampati S. MTDeep: Boosting the security of deep neural nets against adversarial attacks with moving target defense. In: Proceedings of the 10th International Conference on Decision and Game Theory for Security. Stockholm, Sweden: Springer, 2019. 479?491
                      [137] Biggio B, Fumera G, Roli F. Design of robust classifiers for adversarial environments. In: Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics. Anchorage, AK, USA: IEEE, 2011. 977?982
                      [138] Dwork C. Differential privacy. In: Proceedings of the 33rd International Colloquium on Automata, Languages and Programming. Venice, Italy: Springer, 2006. 1?12
                      [139] Dwork C, Mcsherry F, Nissim K, Smith A D. Calibrating noise to sensitivity in private data analysis. In: Proceedings of the 3rd Theory of Cryptography Conference. New York, NY, USA: Springer, 2006. 265?284
                      [140] Jayaraman B, Evans D. Evaluating differentially private machine learning in practice. In: Proceedings of the 28th USENIX Security Symposium. Santa Clara, CA, USA: USENIX Association, 2019. 1895?1912
                      [141] Rahman M A, Rahman T, Laganière R, Mohammed N, Wang Y. Membership inference attack against differentially private deep learning model. Transactions on Data Privacy, 2018, 11(1): 61-79
                      [142] Mcmahan H B, Ramage D, Talwar K, Zhang L. Learning differentially private recurrent language models. In: The 6th International Conference on Learning Representations. Vancouver, BC, Canada: 2018. 1?14
                      [143] Salem A, Zhang Y, Humbert M, Fritz M, Backes M. Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models. In: Proceedings of the 26th Annual Network and Distributed System Security Symposium. San Diego, CA, USA: ISOC, 2019. 1?15
                      [144] Tramèr F, Kurakin A, Papernot N, Goodfellow I, Boneh D, Mcdaniel P. Ensemble adversarial training: Attacks and defenses. In: The 6th International Conference on Learning Representations. Vancouver, BC, Canada: 2018. 1?20
                      [145] Hinton G E, Vinyals O, Dean J. Distilling the knowledge in a neural network. ArXiv: 1503.02531, 2015.
                      [146] Hosseini H, Chen Y, Kannan S, Zhang B, Poovendran R. Blocking transferability of adversarial examples in black-box learning systems. ArXiv: 1703.04318, 2017.
                      [147] Papernot N, Mcdaniel P D. Extending defensive distillation. ArXiv: 1705.05264, 2017.
                      [148] Cao Y, Yang J. Towards making systems forget with machine unlearning. In: Proceedings of the 36th IEEE Symposium on Security and Privacy. San Jose, CA, USA: IEEE, 2015. 463?480
                      [149] Mcsherry F, Talwar K. Mechanism design via differential privacy. In: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science. Providence, RI, USA: IEEE, 2007. 94?103
                      [150] Dwork C, Roth A. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 2014, 9: 211-407
                      [151] 張澤輝, 富瑤, 高鐵杠. 支持數據隱私保護的聯邦深度神經網絡模型研究. 自動化學報. https://doi.org/10.16383/j.aas.c200236

                      Zhang Ze-Hui, Fu Yao, Gao Tie-Gang. Research on federated deep neural network model for data privacy protection. Acta Automatica Sinica, to be published. https://doi.org/10.16383/j.aas.c200236
[152] Carlini N, Liu C, Erlingsson Ú, Kos J, Song D. The secret sharer: Evaluating and testing unintended memorization in neural networks. In: Proceedings of the 28th USENIX Security Symposium. Santa Clara, CA, USA: USENIX Association, 2019. 267−284
                      [153] Melis L, Song C, Cristofaro E D, Shmatikov V. Exploiting unintended feature leakage in collaborative learning. In: Proceedings of the 2019 IEEE Symposium on Security and Privacy. San Francisco, CA, USA: IEEE, 2019. 691?706
                      [154] Song L, Shokri R, Mittal P. Privacy risks of securing machine learning models against adversarial examples. In: Proceedings of the 26th ACM SIGSAC Conference on Computer and Communications Security. London, UK: ACM, 2019. 241?257
                      [155] Ganju K, Wang Q, Yang W, Gunter C A, Borisov N. Property inference attacks on fully connected neural networks using permutation invariant representations. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2018. 619–633
                      [156] Kipf T N, Welling M. Semi-supervised classification with graph convolutional networks. In: The 5th International Conference on Learning Representations. Toulon, France: 2017.
                      [157] Kipf T, Welling M. Variational graph auto-encoders. ArXiv: 1611.07308, 2016.
                      [158] Hamilton W L, Ying R, Leskovec J. Inductive representation learning on large graphs. In: Proceedings of the 31st Annual Conference on Neural Information Processing Systems. Long Beach, CA, USA: MIT Press, 2017. 1025–1035
                      [159] Hou S, Ye Y, Song Y, Abdulhayoglu M. HinDroid: An intelligent Android malware detection system based on structured heterogeneous information network. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Halifax, NS, Canada: ACM, 2017. 1507–1515
                      [160] Ye Y, Hou S, Chen L, Lei J, Wan W, Wang J, et al. Out-of-sample node representation learning for heterogeneous graph in real-time Android malware detection. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao, China: Morgan Kaufmann, 2019. 4150?4156
                      [161] Fan Y, Hou S, Zhang Y, Ye Y, Abdulhayoglu M. Gotcha-sly malware! Scorpion: A metagraph2vec based malware detection system. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. London, UK: ACM, 2018. 253?262
                      [162] Zügner D, Akbarnejad A, Günnemann S. Adversarial attacks on neural networks for graph data. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. London, UK: ACM, 2018. 2847?2856
                      [163] Zhu D, Cui P, Zhang Z, Zhu W. Robust graph convolutional networks against adversarial attacks. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Anchorage, AK, USA: ACM, 2019. 1399?1407
                      [164] Hou S F, Fan Y J, Zhang Y M, Ye Y F, Lei J W, Wan W Q, et al. αCyber: Enhancing robustness of Android malware detection system against adversarial attacks on heterogeneous graph based model. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. Beijing, China: ACM, 2019. 609?618
                      [165] Sun L, Wang J, Yu P S, Li B. Adversarial attack and defense on graph data: A survey. ArXiv: 1812.10528, 2018.
                      [166] Carlini N, Wagner D. Adversarial examples are not easily detected: Bypassing ten detection methods. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. Dallas, Texas, USA: ACM, 2017. 3?14
                      [167] Carlini N, Mishra P, Vaidya T, Zhang Y, Sherr M, Shields C, et al. Hidden voice commands. In: Proceedings of the 25th USENIX Security Symposium. Austin, TX, USA: USENIX Association, 2016. 513?530
                      [168] Miller B, Kantchelian A, Afroz S, Bachwani R, Dauber E, Huang L, et al. Adversarial active learning. In: Proceedings of the 2014 ACM Artificial Intelligent and Security Workshop. Scottsdale, AZ, USA: ACM, 2014. 3?14
Publication history
• Received: January 28, 2021
• Accepted: June 25, 2021
• Published online: August 11, 2021
