
Original scientific article

https://doi.org/10.17559/TV-20231218001216

Tokenization and Memory Optimization for Reducing GPU Load in NLP Deep Learning Models

Dejan Dodić ; The Academy of Applied Technical and Preschool Studies, Department of Information-Communication Technologies, Beogradska 18, Niš, Serbia *
Dušan Regodić ; MB University, Faculty of Business and Law, Department of Advanced information technologies, Teodora Drajzera 27, Belgrade, Serbia

* Corresponding author.


Full text: English pdf 1.802 Kb

pp. 1995-2002




Abstract

In the current landscape of advanced natural language processing (NLP), managing GPU memory effectively is crucial. This paper delves into new tokenization methods and data handling to enhance NLP model efficiency, focusing on avoiding "CUDA out of memory" errors. It examines how sophisticated tokenization and the management of text lengths in large datasets can boost model performance. These insights are vital for optimizing resources and scaling NLP models, especially when GPU memory is limited. The paper also contextualizes the challenges of NLP, underlining the significance of memory optimization amid the growing complexity of language models. It reviews key NLP technologies, including transformer models, and addresses their memory optimization challenges. Moreover, it presents innovative techniques for more effective memory optimization, linking them to ongoing research and trends in NLP. This work aims to advance natural language processing methods and make AI technologies more accessible.
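The core idea summarized above is to bound tokenized sequence length and pad only within each batch so that GPU memory use stays within the card's capacity. The snippet below is a minimal sketch of that idea, assuming the Hugging Face transformers library and a generic pretrained model; the model name, MAX_LEN, and BATCH_SIZE values are illustrative assumptions, not the authors' exact pipeline or configuration.

```python
# Sketch (not the authors' method): cap sequence length and pad per batch
# to keep GPU memory bounded and avoid "CUDA out of memory" errors.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MAX_LEN = 256     # hypothetical cap on tokens per example
BATCH_SIZE = 16   # hypothetical batch size tuned to available GPU memory

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

texts = ["example document " * 50, "a short one"]  # placeholder corpus

# Sort by length so each batch holds similarly sized texts; padding then
# extends only to the longest item in the batch, not in the whole dataset.
texts = sorted(texts, key=len)

for i in range(0, len(texts), BATCH_SIZE):
    batch = texts[i:i + BATCH_SIZE]
    enc = tokenizer(
        batch,
        truncation=True,      # drop tokens beyond MAX_LEN
        max_length=MAX_LEN,
        padding="longest",    # pad only within this batch
        return_tensors="pt",
    ).to(device)
    with torch.no_grad():
        logits = model(**enc).logits
```

In this kind of setup, memory use scales roughly with batch size times the longest sequence in the batch, so truncation plus per-batch padding is what keeps peak allocation predictable.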

Keywords

data tokenization; deep learning; CUDA out of memory; GPU memory optimization; machine learning; natural language processing (NLP)

Hrčak ID:

321922

URI

https://hrcak.srce.hr/321922

Publication date:

31.10.2024.
