Hybrid Neural Network Training on Diverse Hardware
DOI: https://doi.org/10.54820/entrenova-2024-0023

Keywords: Neural Networks, CPU, GPU, SSD, RAM, memory sprites

Abstract
This study presents a comprehensive comparison of hardware configurations for neural network training, focusing on the performance of CPU and GPU processors under varying data storage conditions (SSD drive and RAM disk). First, the training speed and efficiency of neural networks are analysed on a CPU, with data stored initially on an SSD drive and subsequently on a RAM disk, to evaluate the impact of data retrieval speed on training time and accuracy. The analysis is then extended to GPU processors, known for their superior parallel processing capabilities, under identical storage conditions, to discern the benefits and limitations of each hardware setup in neural network training scenarios. Additionally, a novel hybrid architecture is proposed that combines either CPU or GPU processors with the concept of memory sprites, a technique borrowed from early video game development for optimising graphics on limited hardware. This approach aims to leverage the advantages of both processing units while mitigating their weaknesses, offering a potentially superior solution for training complex neural networks efficiently on diverse hardware platforms.
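The SSD-versus-RAM-disk comparison described above can be illustrated with a minimal micro-benchmark. The sketch below is an illustrative assumption, not the authors' actual experimental code: it times reading identical simulated "training batches" from an on-disk temporary file (a stand-in for the SSD case) and from an in-memory buffer (a stand-in for the RAM-disk case). All names, batch sizes, and counts are hypothetical.

```python
import os
import tempfile
import time

# Illustrative sizes only; real training datasets would be far larger.
BATCH_SIZE = 64 * 1024   # 64 KiB per simulated batch
NUM_BATCHES = 256        # total simulated dataset: 16 MiB

payload = os.urandom(BATCH_SIZE)  # one batch of random bytes

# --- on-disk variant (SSD stand-in) ---
# Note: OS page caching can blur the difference on small files; a real
# benchmark would use larger data or drop caches between runs.
with tempfile.NamedTemporaryFile(delete=False) as f:
    for _ in range(NUM_BATCHES):
        f.write(payload)
    path = f.name

start = time.perf_counter()
checksum_disk = 0
with open(path, "rb") as f:
    while chunk := f.read(BATCH_SIZE):
        checksum_disk += chunk[0]
disk_time = time.perf_counter() - start
os.unlink(path)

# --- in-memory variant (RAM-disk stand-in) ---
blob = payload * NUM_BATCHES
start = time.perf_counter()
checksum_mem = 0
for i in range(0, len(blob), BATCH_SIZE):
    checksum_mem += blob[i]
mem_time = time.perf_counter() - start

print(f"disk: {disk_time:.4f}s  memory: {mem_time:.4f}s")
assert checksum_disk == checksum_mem  # both paths consumed identical data
```

In an actual training pipeline the same idea applies at the data-loader level: the fraction of each training step spent waiting on batch I/O determines how much a faster storage tier (or a GPU with an otherwise idle input pipeline) can improve end-to-end training time.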
License
Copyright (c) 2024 ENTRENOVA - ENTerprise REsearch InNOVAtion
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.