Skoči na glavni sadržaj

Izvorni znanstveni članak

https://doi.org/10.20532/cit.2024.1005778

A Brief Survey on Safety of Large Language Models

Zhengjie Gao orcid id orcid.org/0000-0003-0686-4611 ; School of Electronic and Information Engineering, Geely, University of China, Chengdu, China *
Xuanzi Liu ; School of Electronic and Information Engineering, Geely, University of China, Chengdu, China
Yuanshuai Lan ; School of Electronic and Information Engineering, Geely, University of China, Chengdu, China
Zheng Yang ; School of Electronic and Information Engineering, Geely, University of China, Chengdu, China

* Dopisni autor.


Puni tekst: engleski pdf 688 Kb

str. 47-64

preuzimanja: 136

citiraj


Sažetak

Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and have been widely adopted in various applications such as machine translation, chatbots, text summarization, and so on. However, the use of LLMs has raised concerns about their potential safety and security risks. In this survey, we explore the safety implications of LLMs, including ethical considerations, hallucination, and prompt injection. We also discuss current research efforts to mitigate these risks and identify areas for future research. Our survey provides a comprehensive overview of the safety concerns related to LLMs, which can help researchers and practitioners in the NLP community develop more safe and ethical applications of LLMs.

Ključne riječi

large language models; safety; hallucination; prompt injection

Hrčak ID:

319265

URI

https://hrcak.srce.hr/319265

Datum izdavanja:

15.7.2024.

Posjeta: 352 *