AI-Powered Language Models and the Issue of National Security

Authors

Grzegorz Gudzbeler, Kornela Oblińska, Grzegorz Borowik

DOI:

https://doi.org/10.34752/1rck4253

Keywords:

artificial intelligence, disinformation, national security, cybercrime

Abstract

This study investigates the dual-use nature of AI-powered language models, focusing on their implications for national security. Through experimental research utilizing widely accessible AI tools such as ChatGPT and Bing Chat, the authors demonstrate how these technologies can be leveraged to generate credible false information across multiple languages, even by users lacking linguistic expertise. The research highlights the ease of access to such tools globally and identifies methods by which safeguards can be circumvented to produce misleading content. The findings reveal that while advanced AI models incorporate certain protections against explicit disinformation, older versions remain susceptible to manipulation, posing significant risks to social stability and national security. The study underscores the urgent need for thematic filters, ethical frameworks, and international cooperation to establish standards and regulatory measures that limit AI misuse, particularly during crises. Ultimately, the authors advocate for responsible management and oversight of AI technologies to mitigate their potential for harm in the context of information warfare and hybrid threats.

Downloads

Download statistics not available.

Bibliography

Agarwal S., Agarwal B., Gupta R., Chatbots and Virtual Assistants: A Bibliometric Analysis, “Library Hi Tech” 2022, vol. 40. https://doi.org/10.1108/LHT-09-2021-0330.

Adamopoulou E., Moussiades L., A Survey of Chatbot Technologies, Applications and Innovations in Artificial Intelligence, Springer International Publishing, 2020, pp. 373–383. https://doi.org/10.1007/978-3-030-49186-4_31.

Al-Fuqaha A., Guizani M., Mohammadi M., Aledhari M., Ayyash M., Internet of Things: A Survey of Technologies, Protocols, and Supporting Applications, “IEEE Communications Surveys & Tutorials” 2018, vol. 17, no. 4, pp. 2347–2376.

Beaunoyer E., Dupéré S., Guitton M.J., COVID-19 and Digital Inequalities: Mutual Effects and Mitigation Strategies, “Computers in Human Behavior” 2020, vol. 111. https://doi.org/10.1016/j.chb.2020.106424.

Berger J.M., Declining Profits of ISIS on Twitter, “The Atlantic”, January 2016.

Boon Ng S., Exploring STEM Competences for the 21st Century, [in:] Current and Critical Issues in Curriculum, Learning, and Assessment, 2019, vol. 30, p. 53.

Burrows L., The Present and Future of Artificial Intelligence, “Harvard John A. Paulson School of Engineering and Applied Sciences” 2021.

Centola D., Macy M., The Role of Uncertainty in the Transmission of Peer Influence, “Social Networks” 2007, vol. 29, no. 4, pp. 501–518.

Chang K., Hobbs W.R., Roberts M.E., Steinert-Threlkeld Z.C., The COVID-19 Pandemic Increased Circumvention of Censorship and Access to Sensitive Topics in China, “PNAS” 2022, vol. 119, no. 4, e2102818119. https://doi.org/10.1073/pnas.2102818119.

Chilton J., New Cybersecurity Threats Created by ChatGPT, “Harvard Business Review” 2023.

Davenport T.H., Mittal N., How Generative Artificial Intelligence is Changing Creative Work, “Harvard Business Review” 2022.

Duta K.A., Detecting Phishing Websites Using Machine Learning Techniques, “PLOS ONE” 2021. https://doi.org/10.1371/journal.pone.0258361.

Fjeld J., Achten N., Hilligoss H., Nagy A., Srikumar M., Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, “Berkman Klein Center for Internet & Society” 2020.

Fukushi N., Chiba D., Akiyama M., Uchida M., Comprehensive Measurement of Cloud Service Abuse, “Journal of Information Processing” 2021, vol. 29, pp. 93–102. https://doi.org/10.2197/ipsjjip.29.93.

Gasparini M., Tarquini D., Pucci E., Alberti F., D’Alessandro R., Marogna M., Veronese S., Porteri C., Conflicts of Interest and Scientific Societies, “Neurological Sciences” 2020, vol. 41, no. 8, pp. 2095–2102.

Gruetzemacher R., The Power of Natural Language Processing, “Harvard Business Review” 2022.

Hull G., John H., Arief B.R., Ransomware Software Implementation and Analysis Methods: Insights from a Predictive Model and Human Response, “Crime Sci” 2019, vol. 8, art. 2. https://doi.org/10.1186/s40163-019-0097-9.

Karpoff J.M., The Future of Financial Fraud, “Journal of Corporate Finance”, forthcoming (published July 9, 2020). https://ssrn.com/abstract=3642913.

Kim B., Xiong A., Lee D., Han K., A Systematic Review of Fake News Research through the Lens of News Creation and Consumption: Research Efforts, Challenges, and Future Directions, “PLOS ONE” December 2021.

Kott A., AICA’s Value and Challenges: Risk, Resilience, and Quantification, [in:] 2nd International Conference on Autonomous Intelligent Cyber-defence Agents, Bordeaux 2022. https://doi.org/10.13140/RG.2.2.19717.63200.

Koubaa A., Boulila W., Ghouti L., Alzahem A., Lati S., Discovering Opportunities and Limitations of ChatGPT: A Critical Review of NLP Game Changer, “Preprints” 2023. https://doi.org/10.20944/preprints202303.0438.v1.

Kromme C., Large AI Language Models Under Fire: A Country-Level Update, “The Conference Board” 2023.

Lazer D.M.J., Baum M.A., Benkler Y. et al., The Science of Fake News, “Science” 2018, vol. 359, no. 6380, pp. 1094–1096. https://doi.org/10.1126/science.aao2998.

Md Nurul Momen M., Freedom of Speech in the Digital Era: Internet Censorship, “SpringerLink”, May 2020.

Mittal M., Kumar K., Behal S., Deep Learning Approaches for DDoS Attack Detection: A Systematic Review, “Soft Computing” 2022. https://doi.org/10.1007/s00500-021-06608-1.

Nowosielski R., Research Objectives in Management. Methodological Aspects, “Scientific Journals of the Wrocław University of Economics” 2016, vol. 421, pp. 9–23.

Parthasarathy S., How AI-Generated Languages Could Transform Science, “Nature” 2022, vol. 595, no. 7861, pp. 22–24.

Radford A., Wu J., Child R., Luan D., Amodei D., Sutskever I., Language Models are Unsupervised Multitask Learners, “OpenAI Blog” 2019.

Sarzyńska J., Pawlak A., Szymanowska J., Hanusz K., Wawer A., Truth or Lie: Discovering the Language of Deception, “PLOS ONE” 2023, vol. 18, no. 2, e0281179. https://doi.org/10.1371/journal.pone.0281179.

Shaikh A.R., Alhoori H., Sun M., YouTube and Learning: Models of Research Impact, “Scientometrics” 2023, vol. 128, pp. 933–955. https://doi.org/10.1007/s11192-022-04574-5.

Siderska J., Robotic Process Automation – A Driver of Digital Transformation?, “Engineering Management in Production and Services” 2020, vol. 12, no. 6, pp. 21–31.

Solaiman I., Brundage M., Clark J. et al., Release Strategies and Social Impacts of Language Models, “arXiv preprint” 2019, arXiv:1908.09203.

Solove D.J., Identity Theft, Privacy, and the Architecture of Security Gaps, “SSRN Electronic Journal” 2004.

UNESCO, Artificial Intelligence and Gender Equality: Key Findings from UNESCO’s Global Dialogue, 2020.

van Dis E.A.M., Bollen J., Zuidema W., van Rooij R., Bockting C.L., ChatGPT: Five Research Priorities, “Nature” 2023.

ENISA, AI Cybersecurity and Standardization, 2023, https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.

European Union, Artificial Intelligence Act 2024, 2024, https://artificialintelligenceact.eu/wp-content/uploads/2024/01/AIA-Final-Draft-21-January-2024.pdf.

ISO/IEC Standard. ISO/IEC 27001:2022, “ISO.org”, 2022, https://www.iso.org/standard/27001.html.

ISO/IEC Standard. ISO/IEC 38507:2022, “ISO.org”, 2022, https://www.iso.org/standard/38507.html.

NATO Review, Countering Cognitive Warfare: Awareness and Resilience, 2021, https://www.nato.int/docu/review/articles/2021/05/20/countering-cognitive-warfare-awareness-and-resilience/index.html.

Wikipedia, Proxy Server, https://en.wikipedia.org/wiki/Proxy_server.

IBM, What is an Application Programming Interface (API)?, https://www.ibm.com/topics/api.

Published

31-12-2025

Issue

Section

Peer-reviewed articles

How to cite

Gudzbeler, Grzegorz, Kornela Oblińska, and Grzegorz Borowik. 2025. “AI-Powered Language Models and the Issue of National Security”. Wiedza Obronna 293 (4): 177-93. https://doi.org/10.34752/1rck4253.
