Hybrid Neural Network Training on Diverse Hardware
DOI: https://doi.org/10.54820/entrenova-2024-0023

Keywords: Neural Networks, CPU, GPU, SSD, RAM, memory sprites

Abstract
This study presents a comprehensive comparison of hardware configurations for neural network training, focusing on the performance of CPU and GPU processors under varying data storage conditions (SSD drive and RAM disk). First, the training speed and efficiency of neural networks are analysed on a CPU, with data stored on an SSD drive and then on a RAM disk, to evaluate the impact of data retrieval speed on training time and accuracy. The analysis is then extended to GPU processors, renowned for their superior parallel processing capabilities, under identical data storage conditions to discern the benefits and limitations of each hardware setup in neural network training scenarios. Additionally, a novel hybrid architecture is proposed that combines either CPU or GPU processors with the concept of memory sprites, a technique borrowed from early video game development, where it was used to optimize graphics on less capable hardware. This approach aims to leverage the advantages of both processing units while mitigating their weaknesses, offering a potentially superior solution for efficiently training complex neural networks on diverse hardware platforms.
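The paper does not include an implementation, so the sketch below is only one possible reading of the "memory sprite" idea: fixed-size blocks of training data are copied once from SSD-backed storage into RAM, and the same training loop then runs on either a CPU or a GPU against that RAM-resident cache. All names, the block size, the .npy file layout, and the toy model are illustrative assumptions, not the authors' setup.

# Minimal sketch (assumptions noted above): "memory sprites" as fixed-size
# blocks copied once from an SSD-backed .npy file into RAM, so the training
# loop always reads data at RAM speed on either CPU or GPU.
import numpy as np
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader

SPRITE_ROWS = 4096  # rows per preloaded block; value chosen for illustration


class SpriteCachedDataset(Dataset):
    """Serves samples from RAM-resident blocks ("sprites") loaded once from disk.

    Assumes each row of the .npy file holds n_features inputs followed by
    one target value.
    """

    def __init__(self, path: str, n_features: int):
        mmap = np.load(path, mmap_mode="r")                 # stays on the SSD
        self.sprites = [np.array(mmap[i:i + SPRITE_ROWS])   # RAM copies
                        for i in range(0, len(mmap), SPRITE_ROWS)]
        self.n_features = n_features
        self.total = sum(len(s) for s in self.sprites)

    def __len__(self):
        return self.total

    def __getitem__(self, idx):
        block, offset = divmod(idx, SPRITE_ROWS)            # locate the sprite
        row = self.sprites[block][offset]
        x = torch.from_numpy(row[:self.n_features]).float()
        y = torch.tensor(row[self.n_features], dtype=torch.float32)
        return x, y


def train_one_epoch(dataset: SpriteCachedDataset, device: str = "cpu"):
    """Identical loop for CPU and GPU; only the device string changes."""
    loader = DataLoader(dataset, batch_size=256, shuffle=True)
    model = nn.Sequential(nn.Linear(dataset.n_features, 64),
                          nn.ReLU(), nn.Linear(64, 1)).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)
        loss.backward()
        opt.step()


# Example use: the same dataset object feeds either processor, so SSD-vs-RAM
# effects stay separate from the CPU-vs-GPU comparison.
# ds = SpriteCachedDataset("train.npy", n_features=20)   # hypothetical file
# train_one_epoch(ds, device="cuda" if torch.cuda.is_available() else "cpu")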
License
Copyright (c) 2024 ENTRENOVA - ENTerprise REsearch InNOVAtion
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.