Research and Development of Chatbot in National Defense Information
Abstract
A chatbot is an application of Artificial Intelligence (AI) in the field of Natural Language Processing (NLP), the machine learning technology that enables computers to interpret, manage, and understand human language. Organizations today accumulate large volumes of data from communication channels such as email, text messages, social media feeds, video, and audio, and they use NLP software to process this data automatically. The purpose of this study is to develop and evaluate a chatbot system for a defense journal so that it can provide knowledge and answer questions from external users about defense technology. The chatbot's accuracy in answering questions is found to be between 80% and 100%, indicating that it performs very well on the data available in the defense journal.
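The abstract does not describe the system's internal design, so as a rough illustration only, the following is a minimal sketch of one common way such a question-answering chatbot can be built: user questions are matched against passages drawn from journal articles and the best-matching passage is returned. The passage texts, the TF-IDF retriever, and the answer function here are hypothetical placeholders, not the paper's actual pipeline.

# Minimal retrieval-based question-answering sketch (illustrative only).
# Assumes a knowledge base of passages extracted from defense-journal articles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical passages standing in for the journal's article content.
passages = [
    "Unmanned aerial vehicles are used for reconnaissance missions.",
    "Radar-absorbing materials reduce an aircraft's radar cross-section.",
    "Secure communication links rely on frequency-hopping techniques.",
]

# Index the passages with TF-IDF vectors.
vectorizer = TfidfVectorizer()
passage_vectors = vectorizer.fit_transform(passages)

def answer(question: str) -> str:
    """Return the knowledge-base passage most similar to the question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, passage_vectors)[0]
    best = scores.argmax()
    # A production chatbot would typically pass the retrieved passage to a
    # language model to generate a fluent reply; here the passage itself
    # serves as the answer.
    return passages[best]

print(answer("How do aircraft avoid radar detection?"))

An accuracy figure in the reported 80-100% range would then be obtained by scoring the system's answers against a prepared set of test questions with known correct answers.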
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.