Skip to the main content

Original scientific paper

https://doi.org/10.20532/cit.2024.1005778

A Brief Survey on Safety of Large Language Models

Zhengjie Gao orcid id orcid.org/0000-0003-0686-4611 ; School of Electronic and Information Engineering, Geely, University of China, Chengdu, China *
Xuanzi Liu ; School of Electronic and Information Engineering, Geely, University of China, Chengdu, China
Yuanshuai Lan ; School of Electronic and Information Engineering, Geely, University of China, Chengdu, China
Zheng Yang ; School of Electronic and Information Engineering, Geely, University of China, Chengdu, China

* Corresponding author.


Full text: english pdf 688 Kb

page 47-64

downloads: 136

cite


Abstract

Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and have been widely adopted in various applications such as machine translation, chatbots, text summarization, and so on. However, the use of LLMs has raised concerns about their potential safety and security risks. In this survey, we explore the safety implications of LLMs, including ethical considerations, hallucination, and prompt injection. We also discuss current research efforts to mitigate these risks and identify areas for future research. Our survey provides a comprehensive overview of the safety concerns related to LLMs, which can help researchers and practitioners in the NLP community develop more safe and ethical applications of LLMs.

Keywords

large language models; safety; hallucination; prompt injection

Hrčak ID:

319265

URI

https://hrcak.srce.hr/319265

Publication date:

15.7.2024.

Visits: 352 *