Original scientific paper
https://doi.org/10.20532/cit.2024.1005778
A Brief Survey on Safety of Large Language Models
Zhengjie Gao
orcid.org/0000-0003-0686-4611
School of Electronic and Information Engineering, Geely University of China, Chengdu, China
*
Xuanzi Liu
School of Electronic and Information Engineering, Geely University of China, Chengdu, China
Yuanshuai Lan
School of Electronic and Information Engineering, Geely University of China, Chengdu, China
Zheng Yang
School of Electronic and Information Engineering, Geely University of China, Chengdu, China
* Corresponding author.
Abstract
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and have been widely adopted in applications such as machine translation, chatbots, and text summarization. However, their use has raised concerns about potential safety and security risks. In this survey, we explore the safety implications of LLMs, including ethical considerations, hallucination, and prompt injection. We also discuss current research efforts to mitigate these risks and identify areas for future research. Our survey provides a comprehensive overview of the safety concerns related to LLMs, which can help researchers and practitioners in the NLP community develop safer and more ethical applications of LLMs.
Keywords
large language models; safety; hallucination; prompt injection
Hrčak ID:
319265
URI
Publication date:
15 July 2024