Defense-in-Depth Model of Countermeasures against Adversarial AI Attacks: Literature Review and Classification
Source: Journal of Information Systems Security, Volume 21, Number 1 (2025), pp. 51–84
ISSN: 1551-0123 (Print); 1551-0808 (Online)
Authors: Pavankumar Mulgund — University of Memphis, USA
Raghvendra Singh — University at Buffalo, USA
Raj Sharman — University at Buffalo, USA
Manish Gupta — University at Buffalo, USA
Ameya Shastri Pothukuchi — Microsoft, USA
Publisher: Information Institute Publishing, Washington DC, USA
Abstract
The proliferation of artificial intelligence (AI) applications in mainstream businesses has led to a substantial rise in the threat of adversarial artificial intelligence (AAI) attacks, making effective countermeasures to mitigate such risks imperative. While the research community has made progress in developing specific countermeasures and controls, the literature has lacked a comprehensive synthesis that provides an overarching perspective on safeguards against AAI attacks. This paper bridges that gap by presenting a holistic view of countermeasures against AAI attacks and by systematically classifying the identified countermeasures into three categories based on the defense-in-depth (D-i-D) model: preventive, detective, and corrective controls. This framework offers valuable insights for cybersecurity managers, auditors, leaders overseeing AI technologies, and researchers. Our findings reveal a significant emphasis on the development of automated preventive and detective controls to counter AAI attacks; however, further research on procedural or process-based controls and on regulatory compliance is needed to enhance the resilience of AI systems.
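To make the three-way classification concrete, the minimal Python sketch below (ours, not an artifact of the paper) models a catalog of countermeasures tagged as preventive, detective, or corrective controls. The control-type names follow the paper's D-i-D categories; the example countermeasures and citations are illustrative readings of works in the reference list, and all class and field names are hypothetical.

from dataclasses import dataclass
from enum import Enum

class ControlType(Enum):
    PREVENTIVE = "preventive"   # blocks or hardens against attacks before they succeed
    DETECTIVE = "detective"     # flags attacks in progress or after the fact
    CORRECTIVE = "corrective"   # repairs or restores the system once an attack lands

@dataclass
class Countermeasure:
    name: str
    control_type: ControlType
    source: str  # representative citation from the reference list

# Hypothetical catalog; the type assignments are illustrative, not the paper's tables.
CATALOG = [
    Countermeasure("Ensemble adversarial training", ControlType.PREVENTIVE, "Tramèr et al. (2017)"),
    Countermeasure("Defensive distillation", ControlType.PREVENTIVE, "Papernot et al. (2016)"),
    Countermeasure("Feature squeezing", ControlType.DETECTIVE, "Xu et al. (2017)"),
    Countermeasure("Out-of-distribution sample detection", ControlType.DETECTIVE, "Lee et al. (2018)"),
    Countermeasure("Test-time repair via masked autoencoder", ControlType.CORRECTIVE, "Tsai et al. (2023)"),
]

def classify(catalog: list) -> dict:
    """Group countermeasures by D-i-D control type."""
    grouped = {t: [] for t in ControlType}
    for c in catalog:
        grouped[c.control_type].append(f"{c.name} ({c.source})")
    return grouped

if __name__ == "__main__":
    for control_type, entries in classify(CATALOG).items():
        print(f"{control_type.value.title()} controls: {'; '.join(entries)}")

Grouping a catalog this way mirrors the review's classification: each safeguard maps to exactly one control type, so imbalances across the three categories (for instance, the relative scarcity of corrective controls) become immediately visible.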
Keywords
Defense in Depth, Adversarial Artificial Intelligence, Security Controls, Countermeasures and Safeguards, Literature Review and Classification.
References
Abadi, M., and Andersen, D. G. (2016), “Learning To Protect Communications With Adversarial Neural Cryptography,” CoRR, abs/1610.06918. Retrieved from http://arxiv.org/abs/1610.06918.
Abdelaty, M., Scott-Hayward, S., Doriguzzi-Corin, R., and Siracusa, D. (2021, October). ‘GADoT: GAN-based adversarial training for robust DDoS attack detection’. IEEE Conference on Communications and Network Security (pp. 119-127). IEEE.
Al-Dujaili, A., Huang, A., Hemberg, E., and O’Reilly, U. M. (2018, May). ‘Adversarial deep learning for robust detection of binary encoded malware’. IEEE Security and Privacy Workshops (pp. 76-82). IEEE.
Ali, T., Eleyan, A., and Bejaoui, T. (2023, October). ‘Detecting Conventional and Adversarial Attacks Using Deep Learning Techniques: A Systematic Review’. International Symposium on Networks, Computers and Communications (ISNCC) (pp. 1-7). IEEE.
Ali, Y. M. B. (2023), “Adversarial attacks on deep learning networks in image classification based on Smell Bees Optimization Algorithm,” Future Generation Computer Systems, 140: 185-195.
Alsaqour, R., Majrashi, A., Alreedi, M., Alomar, K., and Abdelhaq, M. (2021), “Defense in Depth: Multilayer of security,” International Journal of Communication Networks and Information Security, 13(2): 242-248.
Amarasinghe, K., Kenney, K., and Manic, M. (2018, July). ‘Toward explainable deep neural network based anomaly detection’. 2018 11th international conference on human system interaction (HSI) (pp. 311-317). IEEE.
Andriushchenko, M., Croce, F., Flammarion, N., and Hein, M. (2020, August). ‘Square attack: a query-efficient black-box adversarial attack via random search’. European conference on computer vision (pp. 484-501). Springer International Publishing.
Athalye, A., Carlini, N., and Wagner, D. (2018, July). ‘Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples’. In International conference on machine learning (pp. 274-283). PMLR.
Bai, T., Luo, J., Zhao, J., Wen, B., and Wang, Q. (2021), “Recent Advances In Adversarial Training For Adversarial Robustness,” CoRR, abs/2102.01356. Retrieved from http://arxiv.org/abs/2102.01356.
Batarseh, F. A., Freeman, L., and Huang, C. H. (2021), “A survey on artificial intelligence assurance,” Journal of Big Data, 8(1): 60.
Bécue, A., Praça, I., and Gama, J. (2021), “Artificial intelligence, cyber-threats and Industry 4.0: Challenges and opportunities,” Artificial Intelligence Review, 54(5): 3849-3886.
Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., ... and Zhang, Y. (2019), “AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias,” IBM Journal of Research and Development, 63(4/5): 4-1.
Biggio, B., Corona, I., Nelson, B., Rubinstein, B. I., Maiorca, D., Fumera, G., ... and Roli, F. (2014), “Security evaluation of support vector machines in adversarial environments,” Support Vector Machines Applications, 105-153.
Brocke, J. V., Simons, A., Niehaves, B., Niehaves, B., Reimer, K., Plattfaut, R., and Cleven, A. (2009). ‘Reconstructing the giant: On the importance of rigour in documenting the literature search process’. European Conference on Information Systems.
Bukhari, M., Yasmin, S., Gillani, S., Maqsood, M., Rho, S., and Yeo, S. S. (2023), “Secure Gait Recognition-Based Smart Surveillance Systems Against Universal Adversarial Attacks,” Journal of Database Management (JDM), 34(2): 1-25.
Carlini, N., and Wagner, D. (2016), “Defensive Distillation Is Not Robust To Adversarial Examples,” CoRR, abs/1607.04311. Retrieved from http://arxiv.org/abs/1607.04311.
Carlini, N., and Wagner, D. (2018, May). ‘Audio adversarial examples: Targeted attacks on speech-to-text’. 2018 IEEE security and privacy workshops (SPW) (pp. 1-7). IEEE.
Chen, J., Su, M., Shen, S., Xiong, H., and Zheng, H. (2019), “POBA-GA: Perturbation optimized black-box adversarial attacks via genetic algorithm,” Computers & Security, 85: 89-106.
Chen, P. Y., Zhang, H., Sharma, Y., Yi, J., and Hsieh, C. J. (2017, November). ‘Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models’. Proceedings of the 10th ACM workshop on artificial intelligence and security (pp. 15-26).
Cheng, M., Le, T., Chen, P. Y., Yi, J., Zhang, H., and Hsieh, C. J. (2018), “Query-Efficient Hard-Label Black-Box Attack: An Optimization-Based Approach,” CoRR, abs/1807.04457. Retrieved from http://arxiv.org/abs/1807.04457.
Cherepanova, V., Goldblum, M., Foley, H., Duan, S., Dickerson, J., Taylor, G., and Goldstein, T. (2021), “Lowkey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition,” CoRR, abs/2101.07922. Retrieved from http://arxiv.org/abs/2101.07922.
Chow, S., Eisen, P., Johnson, H., and Van Oorschot, P. C. (2003). ‘White-box cryptography and an AES implementation’. Selected Areas in Cryptography: 9th Annual International Workshop, SAC 2002, St. John’s, Newfoundland, Canada, August 15–16, 2002, Revised Papers (pp. 250-270). Springer Berlin Heidelberg.
Comiter, M. (2019). ‘Attacking artificial intelligence’. Belfer Center Paper, 8, 2019-08.
Dai, H., Li, H., Tian, T., Huang, X., Wang, L., Zhu, J., and Song, L. (2018, July). ‘Adversarial attack on graph structured data’. International conference on machine learning (pp. 1115-1124). PMLR.
Fu, S., He, F., Liu, Y., Shen, L., and Tao, D. (2022), “Robust Unlearnable Examples: Protecting Data Against Adversarial Learning,” CoRR, abs/2203.14533. Retrieved from http://arxiv.org/abs/2203.14533.
Goasduff, L. (2021, September 22). ‘The 4 trends that prevail on the Gartner Hype Cycle for AI, 2021’, https://www.gartner.com/en/articles/the-4-trends-that-prevail-on-the-gartner-hype-cycle-for-ai-2021.
Gupta, K. D., Dasgupta, D., and Akhtar, Z. (2021), “Determining sequence of image processing technique (IPT) to detect adversarial attacks,” SN Computer Science, 2(5): 383.
Hagendorff, T. (2022), “Blind spots in AI ethics,” AI and Ethics, 2(4): 851-867.
Hao, X., Ren, W., Xiong, R., Zhu, T., and Choo, K. K. R. (2021), “Asymmetric cryptographic functions based on generative adversarial neural networks for Internet of Things,” Future Generation Computer Systems, 124: 243-253.
Islam, M. J., Pan, R., Nguyen, G., and Rajan, H. (2020, June). ‘Repairing deep neural networks: Fix patterns and challenges’. Proceedings of the ACM/IEEE 42nd international conference on software engineering (pp. 1135-1146).
Jiang, L., Ma, X., Chen, S., Bailey, J., and Jiang, Y. G. (2019, October). ‘Black-box adversarial attacks on video recognition models’. Proceedings of the 27th ACM International Conference on Multimedia (pp. 864-872).
Kazim, E., and Koshiyama, A. (2020), “AI assurance processes,” Available at SSRN 3685087.
King, T. (2019), ‘Projecting AI-Crime: A Review of Plausible Threats’, in The 2018 Yearbook of the Digital Ethics Lab, 65-84.
Kiru, M. U., and Jantan, A. B. (2019), ‘The age of ransomware: Understanding ransomware and its countermeasures,’ in Artificial Intelligence and Security Challenges in Emerging Networks (pp. 1-37). IGI Global.
Kitchenham, B. A. (2012, September). ‘Systematic review in software engineering: where we are and where we should be going’. Proceedings of the 2nd international workshop on Evidential assessment of software technologies (pp. 1-2).
Kong, Z., Xue, J., Liu, Z., Wang, Y., and Han, W. (2023), “MalDBA: Detection for Query-Based Malware Black-Box Adversarial Attacks,” Electronics, 12(7): 1751.
Kumar, A., Braud, T., Tarkoma, S., and Hui, P. (2020, March). ‘Trustworthy AI in the age of pervasive computing and big data’. IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops) (pp. 1-6). IEEE.
Lamba, A., Singh, B., Singh, S., Dutta, N., Sai, S., and Muni, R. (2016), “S4: A novel & secure method for enforcing privacy in cloud data warehouses,” International Journal for Technological Research in Engineering, 3(8): 5707-5710.
Laskov, P., and Lippmann, R. (2010), “Machine learning in adversarial environments,” Machine Learning, 81: 115-119.
Le, T., Bui, A. T., Zhao, H., Montague, P., Tran, Q., and Phung, D. (2022, May). ‘On global-view based defense via adversarial attack and defense risk guaranteed bounds’. International Conference on Artificial Intelligence and Statistics (pp. 11438-11460). PMLR.
Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., and Jana, S. (2019, May). ‘Certified robustness to adversarial examples with differential privacy’. 2019 IEEE symposium on security and privacy (SP) (pp. 656-672). IEEE.
Lee, K., Lee, K., Lee, H., and Shin, J. (2018), “A simple unified framework for detecting out-of-distribution samples and adversarial attacks,” Advances in neural information processing systems, 31.
Li, H., Li, G., and Yu, Y. (2019), “ROSA: Robust salient object detection against adversarial attacks,” IEEE Transactions on Cybernetics, 50(11): 4835-4847.
Liang, B., Li, H., Su, M., Li, X., Shi, W., and Wang, X. (2018), “Detecting adversarial image examples in deep neural networks with adaptive noise reduction,” IEEE Transactions on Dependable and Secure Computing, 18(1): 72-85.
Liu, D., Yu, R., and Su, H. (2019, September). ‘Extending adversarial attacks and defenses to deep 3d point cloud classifiers’. 2019 IEEE International Conference on Image Processing (ICIP) (pp. 2279-2283). IEEE.
Liu, Y., Ning, P., and Reiter, M. K. (2011), “False data injection attacks against state estimation in electric power grids,” ACM Transactions on Information and System Security (TISSEC), 14(1): 1-33.
Luo, B., Liu, Y., Wei, L., and Xu, Q. (2018). ‘Towards imperceptible and robust adversarial example attacks against neural networks’. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
Marković-Petrović, J. D. (2020), ‘Methodology for Cyber Security Risk Mitigation in Next Generation SCADA Systems’ in Cyber Security of Industrial Control Systems in the Future Internet Environment (pp. 27-46). IGI Global.
Mahabadi, R. K., Mai, F., and Henderson, J. (2019). ‘Learning entailment-based sentence embeddings from natural language inference’.
Makarius, E. E., Mukherjee, D., Fox, J. D., and Fox, A. K. (2020), “Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization,” Journal of Business Research, 120: 262-273.
Massoli, F. V., Carrara, F., Amato, G., and Falchi, F. (2021), “Detection of face recognition adversarial attacks,” Computer Vision and Image Understanding, 202: 103103.
Milton, M. A. A. (2018), “Evaluation Of Momentum Diverse Input Iterative Fast Gradient Sign Method (M-DI2-FGSM) Based Attack Method On MCS 2018 Adversarial Attacks On Black Box Face Recognition System,” CoRR, abs/1806.08970. Retrieved from http://arxiv.org/abs/1806.08970.
Mishra, S., Li, X., Pan, T., Kuhnle, A., Thai, M. T., and Seo, J. (2016), “Price modification attack and protection scheme in smart grid,” IEEE Transactions on Smart Grid, 8(4): 1864-1875.
Mo, Y., and Sinopoli, B. (2010, April). ‘False data injection attacks in control systems’. 1st workshop on Secure Control Systems (Vol. 1) (pp. 1-6).
Moon, S., An, G., and Song, H. O. (2022). ‘Preemptive image robustification for protecting users against man-in-the-middle adversarial attacks’. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7): 7823-7830.
Muneer, S., Farooq, U., Athar, A., Ahsan Raza, M., Ghazal, T. M., and Sakib, S. (2024), “A Critical Review of Artificial Intelligence Based Approaches in Intrusion Detection: A Comprehensive Analysis,” Journal of Engineering, 2024(1): 3909173.
Papernot, N., McDaniel, P., Sinha, A., and Wellman, M. (2016), “Towards The Science Of Security And Privacy In Machine Learning,” CoRR, abs/1611.03814. Retrieved from http://arxiv.org/abs/1611.03814.
Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. (2016). ‘Distillation as a defense to adversarial perturbations against deep neural networks’. IEEE symposium on security and privacy (SP) (pp. 582-597). IEEE.
Pare, G., Tate, M., Johnstone, D., and Kitsiou, S. (2016), “Contextualizing the twin concepts of systematicity and transparency in information systems literature reviews,” European Journal of Information Systems, 25(6): 493-508.
Pauling, C., Gimson, M., Qaid, M., Kida, A., and Halak, B. (2022), “A Tutorial On Adversarial Learning Attacks And Countermeasures,” CoRR, abs/2202.10377. Retrieved from http://arxiv.org/abs/2202.10377.
Pendleton, S. D., Andersen, H., Du, X., Shen, X., Meghjani, M., Eng, Y. H., ... and Ang, M. H. (2017), “Perception, planning, control, and coordination for autonomous vehicles,” Machines, 5(1): 6.
Qiu, S., Liu, Q., Zhou, S., and Wu, C. (2019), “Review of artificial intelligence adversarial attack and defense technologies,” Applied Sciences, 9(5): 909.
Qureshi, A. U. H., Larijani, H., Yousefi, M., Adeel, A., and Mtetwa, N. (2020), “An adversarial approach for intrusion detection systems using Jacobian Saliency Map Attacks (JSMA) algorithm,” Computers, 9(3): 58.
Rakin, A. S., He, Z., Li, J., Yao, F., Chakrabarti, C., and Fan, D. (2021), “T-bfa: Targeted bit-flip adversarial weight attack,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11): 7928-7939.
Rashid, A., and Such, J. (2023), “StratDef: Strategic defense against adversarial attacks in ML-based malware detection,” Computers & Security, 134: 103459.
Ren, K., Zheng, T., Qin, Z., and Liu, X. (2020), “Adversarial attacks and defenses in deep learning,” Engineering, 6(3): 346-360.
Rocha, F., Gross, T., and Van Moorsel, A. (2013, March). ‘Defense-in-depth against malicious insiders in the cloud’. 2013 IEEE International Conference on Cloud Engineering (IC2E) (pp. 88-97). IEEE.
Rowe, F. (2014), “What literature review is not: diversity, boundaries and recommendations,” European Journal of Information Systems, 23(3): 241-255.
Sarvar, A., and Amirmazlaghani, M. (2023), “Defense against adversarial examples based on wavelet domain analysis,” Applied Intelligence, 53(1): 423-439.
Saxe, J., and Berlin, K. (2015). ‘Deep neural network based malware detection using two dimensional binary program features’. 10th international conference on malicious and unwanted software (MALWARE) (pp. 11-20). IEEE.
Shao, Z., Wu, Z., and Huang, M. (2021), “Advexpander: Generating natural language adversarial examples by expanding text,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30: 1184-1196.
Sharma, Y., and Chen, P. Y. (2018). “Bypassing Feature Squeezing By Increasing Adversary Strength,” CoRR, abs/1803.09868. Retrieved from http://arxiv.org/abs/1803.09868.
Shi, Y., Sagduyu, Y. E., Davaslioglu, K., and Li, J. H. (2018, December). ‘Generative adversarial networks for black-box API attacks with limited training data’. IEEE International Symposium on Signal Processing and Information Technology (ISSPIT) (pp. 453-458). IEEE.
Simon-Gabriel, C. J., Sheikh, N. A., and Krause, A. (2021, July). ‘PopSkipJump: Decision-based attack for probabilistic classifiers’. International Conference on Machine Learning (pp. 9712-9721). PMLR.
Song, J., Li, Z., Hu, Z., Wu, Y., Li, Z., Li, J., and Gao, J. (2020). ‘Poisonrec: an adaptive data poisoning framework for attacking black-box recommender systems’. IEEE 36th international conference on data engineering (ICDE) (pp. 157-168). IEEE.
Suthar, A. C., Joshi, V., and Prajapati, R. (2022), ‘A review of generative adversarial-based networks of machine learning/artificial intelligence in healthcare’, in Handbook of Research on Lifestyle Sustainability and Management Solutions Using AI, Big Data Analytics, and Visualization, 37-56.
Tabassi, E., Burns, K. J., Hadjimichael, M., Molina-Markham, A. D., and Sexton, J. T. (2019), “A taxonomy and terminology of adversarial machine learning,” NIST IR, 1-29.
Tang, Y., and Wu, X. (2019), “Salient object detection using cascaded convolutional neural networks and adversarial learning,” IEEE Transactions on Multimedia, 21(9): 2237-2247.
Templier, M., and Pare, G. (2018), “Transparency in literature reviews: an assessment of reporting practices across review types and genres in top IS journals,” European Journal of Information Systems, 27(5): 503-550.
Theagarajan, R., and Bhanu, B. (2020). ‘Defending black box facial recognition classifiers against adversarial attacks’. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops (pp. 812-813).
Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. (2017), “Ensemble Adversarial Training: Attacks And Defenses,” CoRR, abs/1705.07204. Retrieved from http://arxiv.org/abs/1705.07204.
Tsai, Y. Y., Chao, J. C., Wen, A., Yang, Z., Mao, C., Shah, T., and Yang, J. (2023), “Test-Time Detection And Repair Of Adversarial Samples Via Masked Autoencoder,” CoRR, abs/2303.12848. Retrieved from http://arxiv.org/abs/2303.12848.
Vähäkainu, P., Lehto, M., and Kariluoto, A. (2021, February). ‘Adversarial Poisoning Attack's Impact on Prediction Functionality of ML-Based Feedback Loop System in Cyber-Physical Context’. ICCWS 2021 16th International Conference on Cyber Warfare and Security (p. 373). Academic Conferences Limited.
Vivek, B. S., Baburaj, A., and Babu, R. V. (2019, June). ‘Regularizer to Mitigate Gradient Masking Effect During Single-Step Adversarial Training’. CVPR Workshops (p. 66).
Wang, G., Wang, T., Zheng, H., and Zhao, B. Y. (2014). ‘Man vs. machine: Practical adversarial detection of malicious crowdsourcing workers’. 23rd USENIX Security Symposium (USENIX Security 14) (pp. 239-254).
Wang, K., Sun, W., and Du, Q. (2021), “A non-cooperative meta-modeling game for automated third-party calibrating, validating and falsifying constitutive laws with parallelized adversarial attacks,” Computer Methods in Applied Mechanics and Engineering, 373: 113514.
Wang, S. P., Shumba, R., and Kelly, W. (2017, February), “Security by design: Defense-in-depth IoT architecture,” Journal of The Colloquium for Information Systems Security Education (pp. 15-15).
Woldeyohannes, H. D. (2021). ‘Review on “Adversarial Robustness Toolbox (ART) v1.5.x”: ART Attacks against Supervised Learning Algorithms Case Study’. Department of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice. Unpublished thesis.
World Economic Forum (2019, April 4). ‘AI raises the risk of cyberattack – and the best defence is more AI’, https://www.weforum.org/agenda/2019/04/how-ai-raises-the-threat-of-cyberattack-and-why-the-best-defence-is-more-ai-5eb78ba081/.
Wu, D., Wu, T., and Wu, X. (2020). ‘A differentially private random decision tree classifier with high utility’. Machine Learning for Cyber Security: Third International Conference, ML4CS 2020, Guangzhou, China, October 8–10, 2020, Proceedings, Part I 3 (pp. 376-385). Springer International Publishing.
Wu, X., Wu, T., Khan, M., Ni, Q., and Dou, W. (2017), “Game theory based correlated privacy preserving analysis in big data,” IEEE Transactions on Big Data, 7(4): 643-656.
Wu, X., Qi, L., Gao, J., Ji, G., and Xu, X. (2022), “An ensemble of random decision trees with local differential privacy in edge computing,” Neurocomputing, 485: 181-195.
Xiang, Y., Xu, Y., Li, Y., Ma, W., Xuan, Q., and Liu, Y. (2020), “Side-channel gray-box attack for DNNs,” IEEE Transactions on Circuits and Systems II: Express Briefs, 68(1): 501-505.
Xu, J., Sun, Y., Jiang, X., Wang, Y., Wang, C., Lu, J., and Yang, Y. (2022, June). ‘Blindfolded attackers still threatening: Strict black-box adversarial attacks on graphs’. Proceedings of the AAAI Conference on Artificial Intelligence.
Xu, W., Evans, D., and Qi, Y. (2017), “Feature Squeezing Mitigates And Detects Carlini/Wagner Adversarial Examples,” CoRR, abs/1705.10686. Retrieved from http://arxiv.org/abs/1705.10686.
Yang, L., Song, Q., and Wu, Y. (2021), “Attacks on state-of-the-art face recognition using attentional adversarial attack generative network,” Multimedia Tools and Applications, 80: 855-875.
Yin, M., Zhang, Y., Li, X., and Wang, S. (2018, October). ‘When deep fool meets deep prior: Adversarial attack on super-resolution network’. Proceedings of the 26th ACM international conference on Multimedia (pp. 1930-1938).
Zhang, Y., Nie, S., Liang, S., and Liu, W. (2021), “Robust text image recognition via adversarial sequence-to-sequence domain adaptation,” IEEE Transactions on Image Processing, 30: 3922-3933.
Zhang, Y., Song, Y., Liang, J., Bai, K., and Yang, Q. (2020). ‘Two sides of the same coin: White-box and black-box attacks for transfer learning’. Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 2989-2997).
Zhang, Z., Cutkosky, A., and Paschalidis, I. (2022, May). ‘Adversarial tracking control via strongly adaptive online learning with memory’. International Conference on Artificial Intelligence and Statistics (pp. 8458-8492). PMLR.
Zhu, F., Chang, X., Zeng, R., and Tan, M. (2019), “Continual Reinforcement Learning With Diversity Exploration And Adversarial Self-Correction,” CoRR, abs/1906.09205. Retrieved from http://arxiv.org/abs/1906.09205.
Zhu, T., and Philip, S. Y. (2019, July). ‘Applying differential privacy mechanism in artificial intelligence’. IEEE 39th international conference on distributed computing systems (ICDCS) (pp. 1601-1609). IEEE.