Original scientific article

https://doi.org/10.17559/TV-20150126122253

Performance of the fixed-point autoencoder

Jingfei Jiang ; Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, 109 DeYa Road, ChangSha, Hunan 410073, China
Rongdong Hu ; Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, 109 DeYa Road, ChangSha, Hunan 410073, China
Dongsheng Wang ; Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, 109 DeYa Road, ChangSha, Hunan 410073, China
Jinwei Xu ; Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, 109 DeYa Road, ChangSha, Hunan 410073, China
Yong Dou ; Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, 109 DeYa Road, ChangSha, Hunan 410073, China


Full text: Croatian pdf (1.036 Kb), pp. 77-82

Full text: English pdf (1.036 Kb), pp. 77-82


Abstract

The autoencoder is one of the most typical deep learning models and has mainly been used for unsupervised feature learning in applications such as recognition, identification and mining. Autoencoder algorithms are compute-intensive, and building a large-scale autoencoder model can satisfy the analysis requirements of huge data volumes, but the training time can become unbearable, which naturally leads to investigating hardware acceleration platforms such as FPGAs. Software versions of the autoencoder usually use single-precision or double-precision floating-point representations, but floating-point units are very expensive to implement on an FPGA, so fixed-point arithmetic is often used when implementing the autoencoder in hardware. The resulting accuracy loss, however, is often ignored, and its implications have not been studied in previous works; only a few works have focused on accelerators that use certain fixed bit-widths for other neural network models. Our work gives a comprehensive evaluation of the implications of fixed-point precision for the autoencoder, aiming at the best performance and area efficiency. Data format conversion, matrix blocking methods and the approximation of complex functions are the main factors considered, according to the conditions of a hardware implementation. This paper evaluates a simulation method for the data conversion, matrix blocking with different degrees of parallelism, and a simple piecewise linear approximation (PLA) method. The results showed that the fixed-point bit-width does affect the performance of the autoencoder and that multiple factors can have crossed effects. Each factor has a two-sided impact, discarding "redundant" information and "useful" information at the same time, so the representation range must be carefully selected according to the computation parallelism. The results also showed that fixed-point arithmetic can preserve the precision of the autoencoder algorithm and achieve an acceptable convergence speed.
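The paper itself is behind the PDF links above; as a minimal illustration of the data format conversion the abstract describes, the following Python sketch simulates conversion of floats to a signed fixed-point format with round-to-nearest and saturation. The Q-format split, default bit-widths and function name are illustrative assumptions, not taken from the paper.

import numpy as np

def to_fixed(x, int_bits=4, frac_bits=11):
    """Simulate conversion of floats to a signed fixed-point format.

    The format has 1 sign bit, `int_bits` integer bits and `frac_bits`
    fractional bits. Values are rounded to the nearest step (2**-frac_bits)
    and saturated to the representable range, as hardware would do, then
    returned as floats so the rest of the simulation stays in NumPy.
    """
    scale = 2.0 ** frac_bits
    lim = 2 ** (int_bits + frac_bits)      # magnitude limit of the raw code
    raw = np.clip(np.round(x * scale), -lim, lim - 1)
    return raw / scale

# Example: quantization error introduced on a random weight matrix
w = np.random.randn(256, 256) * 0.1
w_q = to_fixed(w, int_bits=2, frac_bits=13)
print("max abs quantization error:", np.max(np.abs(w - w_q)))

Round-to-nearest with saturation is only one plausible conversion scheme; truncation or wrap-around arithmetic would trade accuracy for cheaper hardware, which is exactly the kind of choice the paper's evaluation is about.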
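The matrix blocking factor can likewise be simulated by re-quantizing partial sums after each block of multiply-accumulate operations. In this sketch the block size stands in for the hardware parallelism; the bit-width choices and helper names are hypothetical, not the paper's parameters.

import numpy as np

def quantize(x, int_bits, frac_bits):
    """Round-and-saturate to signed fixed point (same scheme as above)."""
    scale = 2.0 ** frac_bits
    lim = 2 ** (int_bits + frac_bits)
    return np.clip(np.round(x * scale), -lim, lim - 1) / scale

def blocked_matmul(a, b, block, int_bits=6, frac_bits=10):
    """Blocked matrix product with partial sums re-quantized per block.

    Accumulating `block` products at a time and then squeezing the partial
    sum back into fixed point mimics an accelerator whose accumulator is
    written to a limited-width register after each group of parallel
    multiply-accumulate lanes.
    """
    c = np.zeros((a.shape[0], b.shape[1]))
    for s in range(0, a.shape[1], block):
        c = quantize(c + a[:, s:s + block] @ b[s:s + block, :],
                     int_bits, frac_bits)
    return c

rng = np.random.default_rng(0)
a = quantize(rng.standard_normal((64, 128)) * 0.1, 2, 13)
b = quantize(rng.standard_normal((128, 64)) * 0.1, 2, 13)
print("max deviation from float product:",
      np.max(np.abs(a @ b - blocked_matmul(a, b, block=16))))

A larger block accumulates more products before the partial sum is narrowed, so the representation range of the accumulator must grow with the parallelism; this is the interaction between representation range and computation parallelism that the abstract points to.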
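For the complex-function approximation, the sigmoid activation is the usual target in an autoencoder. A simple uniform-segment PLA can be sketched as follows; the input range and segment count are illustrative choices, not the paper's.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pla_sigmoid(x, x_min=-8.0, x_max=8.0, segments=16):
    """Piecewise linear approximation of the sigmoid.

    The input range is split into uniform segments; each segment is the
    straight line through the sigmoid's values at its endpoints, as a small
    hardware lookup table of slopes and intercepts would store. Inputs
    outside the range saturate toward 0 or 1.
    """
    edges = np.linspace(x_min, x_max, segments + 1)
    y = sigmoid(edges)
    xc = np.clip(x, x_min, x_max)
    idx = np.minimum(((xc - x_min) * segments / (x_max - x_min)).astype(int),
                     segments - 1)
    slope = (y[idx + 1] - y[idx]) / (edges[idx + 1] - edges[idx])
    return y[idx] + slope * (xc - edges[idx])

x = np.linspace(-10.0, 10.0, 1001)
print("max PLA error:", np.max(np.abs(sigmoid(x) - pla_sigmoid(x))))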

Keywords

AutoEncoder; deep learning; fixed-point arithmetic; FPGA

Hrčak ID:

153158

URI

https://hrcak.srce.hr/153158

Publication date:

19.2.2016.
