
Original scientific paper

https://doi.org/10.2498/cit.2001.01.01

Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization

K. Wendy Tang
Jadranka Skorin-Kapov


Full text: English, PDF, 425 KB

pp. 1-14

Downloads: 2,484



Abstract

In this paper we explore different strategies to guide the backpropagation algorithm used for training artificial neural networks. Two variants of the steepest descent-based backpropagation algorithm and four variants of the conjugate gradient algorithm are tested. The variants differ in whether or not the time component is used, and whether or not additional gradient information is utilized during one-dimensional optimization. Testing is performed on randomly generated data as well as on some benchmark data regarding energy prediction. Based on our test results, it appears that the most promising backpropagation strategy is to initially use the steepest descent algorithm and then continue with the conjugate gradient algorithm. The backpropagation-through-time strategy combined with conjugate gradients appears to be promising as well.
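The hybrid strategy the abstract recommends can be sketched as follows. This is a minimal illustration, not the authors' code: a tiny one-hidden-layer network trained on random data, first with a few plain steepest-descent backpropagation steps, then with a Polak-Ribière nonlinear conjugate-gradient update using a backtracking (Armijo) line search as the one-dimensional optimization. Network size, learning rate, and line-search constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly generated training data (the paper also tests on random data).
X = rng.standard_normal((32, 4))
y = rng.standard_normal((32, 1))

# One hidden layer of 8 tanh units; parameters flattened into one vector.
theta = np.concatenate([
    (rng.standard_normal((4, 8)) * 0.1).ravel(),   # W1: 4 -> 8
    (rng.standard_normal((8, 1)) * 0.1).ravel(),   # W2: 8 -> 1
])

def loss_grad(theta):
    """Mean-squared-error loss and its gradient via backpropagation."""
    w1, w2 = theta[:32].reshape(4, 8), theta[32:].reshape(8, 1)
    h = np.tanh(X @ w1)                 # forward pass
    err = h @ w2 - y
    loss = 0.5 * np.mean(err ** 2)
    d_out = err / len(X)                # backward pass
    g2 = h.T @ d_out
    d_h = (d_out @ w2.T) * (1 - h ** 2)
    g1 = X.T @ d_h
    return loss, np.concatenate([g1.ravel(), g2.ravel()])

def line_search(theta, d, loss, g, alpha=1.0, shrink=0.5, c=1e-4):
    """Backtracking one-dimensional optimization along direction d."""
    while True:
        new_loss, _ = loss_grad(theta + alpha * d)
        if new_loss <= loss + c * alpha * (g @ d) or alpha < 1e-10:
            return alpha
        alpha *= shrink

loss0, _ = loss_grad(theta)

# Phase 1: steepest descent (plain backpropagation steps).
for _ in range(20):
    loss, g = loss_grad(theta)
    theta -= 0.1 * g

# Phase 2: Polak-Ribiere conjugate gradient with line search.
loss, g = loss_grad(theta)
d = -g
for _ in range(30):
    alpha = line_search(theta, d, loss, g)
    theta += alpha * d
    loss, g_new = loss_grad(theta)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ restart rule
    d = -g_new + beta * d
    g = g_new

print(f"loss before: {loss0:.4f}, after hybrid training: {loss:.4f}")
```

The switch point (here, after 20 steepest-descent steps) is an arbitrary choice for illustration; the paper's point is that cheap gradient steps first, followed by conjugate-gradient steps with line search, tends to outperform either method alone.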

Keywords

Hrčak ID:

44817

URI

https://hrcak.srce.hr/44817

Publication date:

30.3.2001.

Visits: 3,004 *