
Exploring the Risks of AI to Human Worth


Source
Journal of Information Systems Security
Volume 21, Number 3 (2025)
Pages 185–199
ISSN 1551-0123 (Print)
ISSN 1551-0808 (Online)
Authors
Adriaan Lombard — University of Tulsa, USA
Stephen Flowerday — Augusta University, USA
Publisher
Information Institute Publishing, Washington DC, USA

Abstract

In an era where artificial intelligence (AI) is seamlessly integrated into daily life, our paper investigates the intersection of generative AI and cybersecurity, exploring how AI platforms such as ChatGPT, Dall-E, and Midjourney, powered by large language models (LLMs), together with the Big Five personality traits, influence individual self-worth. We interrogate AI's capacity to meet human needs as outlined by Maslow's hierarchy of needs and its simultaneous potential to pose cybersecurity risks, especially privacy infringements and psychological manipulation. Our findings highlight significant cybersecurity challenges, including vulnerability to data breaches and the exploitation of personality traits, which can undermine human worth. By presenting an innovative model that marries Maslow's hierarchy with the Big Five traits, the study underscores the critical role of cybersecurity in the ethical integration of AI into society. We call for a proactive balance in AI advancement that prioritizes robust cybersecurity strategies to protect and elevate human dignity in the digital age.


Keywords

Generative AI, Human Worth, Big Five Personality Traits, Maslow’s Hierarchy, Cybersecurity.


References

Babina, T., Fedyk, A., He, A., and Hodson, J. (2023). Firm Investments in Artificial Intelligence Technologies and Changes in Workforce Composition. SSRN, 1–66.

Bawack, R. E., Wamba, S. F., and Carillo, K. D. A. (2021). Exploring the role of personality, trust, and privacy in customer experience performance during voice shopping: Evidence from SEM and fuzzy set qualitative comparative analysis. International Journal of Information Management, 58, 102309.

Bhargava, A., Bester, M., and Bolton, L. (2021). Employees’ Perceptions of the Implementation of Robotics, Artificial Intelligence, and Automation (RAIA) on Job Satisfaction, Job Security, and Employability. Journal of Technology in Behavioral Science, 6(1), 106–113.

Brauner, P., Hick, A., Philipsen, R., and Ziefle, M. (2023). What does the public think about artificial intelligence?—A criticality map to understand bias in the public perception of AI. Frontiers in Computer Science, 5. https://www.frontiersin.org/articles/10.3389/fcomp.2023.1113903

Chakraborty, M., Singh, R. K., Hussein, T. M., Kler, R., Khan, S., and Mishra, S. (2023). Maslow’s Hierarchy-Inspired AI-Driven Employee Satisfaction Improvement. 2023 3rd International Conference on Technological Advancements in Computational Sciences (ICTACS), 987–993.

Cherry, C. (1997). Health care, human worth and the limits of the particular. Journal of Medical Ethics, 23(5), 310–314.

Correia, A., Fonseca, B., Paredes, H., Chaves, R., Schneider, D., and Jameel, S. (2021). Determinants and Predictors of Intentionality and Perceived Reliability in Human-AI Interaction as a Means for Innovative Scientific Discovery. 2021 IEEE International Conference on Big Data (Big Data), 3681–3684.

Digman, J. M. (1990). Personality Structure: Emergence of the Five-Factor Model. Annual Review of Psychology, 41(1), 417–440.

European Commission (2021). Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence. Press release of 21 April, Brussels. https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682

Frauenstein, E. D., Flowerday, S., Mishi, S., and Warkentin, M. (2023). Unraveling the behavioral influence of social media on phishing susceptibility: A Personality-Habit-Information Processing model. Information & Management, 60(7), 103858.

García-Peñalvo, F. J. (2023). La percepción de la Inteligencia Artificial en contextos educativos tras el lanzamiento de ChatGPT: Disrupción o pánico. Education in the Knowledge Society (EKS), 24, e31279.

García-Peñalvo, F., and Vázquez-Ingelmo, A. (2023). What Do We Mean by GenAI? A Systematic Mapping of The Evolution, Trends, and Techniques Involved in Generative AI. International Journal of Interactive Multimedia and Artificial Intelligence, 8(4), 7.

Gursoy, D., Chi, O. H., Lu, L., and Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169.

Harter, S. (2012). Self-perception profile for adolescents: Manual and questionnaires. University of Denver, 31–45.

Kelly, S., Kaye, S.-A., and Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925.

Kumar, S. (2019, December 12). Advantages and Disadvantages of Artificial Intelligence. Medium. https://towardsdatascience.com/advantages-and-disadvantages-of-artificial-intelligence-182a5ef6588c

Lebovitz, S., Levina, N., and Lifshitz-Assaf, H. (2021). Is AI Ground Truth Really True? The Dangers Of Training And Evaluating AI Tools Based On Experts’ Know-What. MIS Quarterly, 45(3).

Maslow, A. H. (1943). A Theory of Human Motivation. Classics in the History of Psychology. https://psychclassics.yorku.ca/Maslow/motivation.htm

Matthews, G., Hancock, P. A., Lin, J., Panganiban, A. R., Reinerman-Jones, L. E., Szalma, J. L., and Wohleber, R. W. (2021). Evolution and revolution: Personality research for the coming world of robots, artificial intelligence, and autonomous systems. Personality and Individual Differences, 169, 109969.

Ngo, R., Chan, L., and Mindermann, S. (2023). The alignment problem from a deep learning perspective (arXiv:2209.00626). arXiv. http://arxiv.org/abs/2209.00626

Oksanen, A., Savela, N., Latikka, R., and Koivula, A. (2020). Trust Toward Robots and Artificial Intelligence: An Experimental Approach to Human–Technology Interactions Online. Frontiers in Psychology, 11. https://www.frontiersin.org/articles/10.3389/fpsyg.2020.568256

Sætra, H. S. (2019). The Ghost in the Machine: Being Human in the Age of AI and Machine Learning. Human Arenas, 2(1), 60–78.

Sætra, H. S. (2022). Loving robots changing love: Towards a practical deficiency-love. Journal of Future Robot Life, 3(2), 109–127.

Sætra, H. S. (2023). Generative AI: Here to stay, but for good? Technology in Society, 75, 102372.

Sætra, H. S., and Mills, S. (2022). Psychological interference, liberty and technology. Technology in Society, 69, 101973.

Schadelbauer, L., Schlögl, S., and Groth, A. (2023). Linking Personality and Trust in Intelligent Virtual Assistants. Multimodal Technologies and Interaction, 7, Article 6.

Van Der Schyff, K., Flowerday, S., and Lowry, P. B. (2020). Information privacy behavior in the use of Facebook apps: A personality-based vulnerability assessment. Heliyon, 6(8), e04714.

Sebastian, G. (2023). Do ChatGPT and Other AI Chatbots Pose a Cybersecurity Risk?: An Exploratory Study. International Journal of Security and Privacy in Pervasive Computing, 15(1), 1–11.

Sheng, H., and Chen, Y. (2020). An Empirical Study on Factors influencing Users’ Psychological Reactance to Artificial Intelligence Applications. 2020 7th International Conference on Information Science and Control Engineering (ICISCE), 234–237.

Shevlin, H. (2022). All too human? Identifying and mitigating ethical risks of Social AI. PhilPapers, 1–15. https://philpapers.org/rec/SHEATH-4

Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., and Anderson, R. (2023). The Curse of Recursion: Training on Generated Data Makes Models Forget (arXiv:2305.17493; Version 2). arXiv. http://arxiv.org/abs/2305.17493

Sindermann, C., Yang, H., Elhai, J. D., Yang, S., Quan, L., Li, M., and Montag, C. (2022). Acceptance and Fear of Artificial Intelligence: Associations with personality in a German and a Chinese sample. Discover Psychology, 2(1), 8.

Song, H., W, H., Mi, Y.-L., and S, Y. (2017). Analysis of AI Development and the Relationship of AI to IoT Security. DEStech Transactions on Computer Science and Engineering.

Veselovsky, V., Ribeiro, M. H., and West, R. (2023). Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks (arXiv:2306.07899). arXiv. http://arxiv.org/abs/2306.07899

van der Zant, T., Kouw, M., and Schomaker, L. (2013). Generative Artificial Intelligence. In V. C. Müller (Ed.), Philosophy and Theory of Artificial Intelligence (pp. 107–120). Springer Berlin Heidelberg.