
Original scientific paper
https://doi.org/10.2498/cit.2001.01.01

Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization

K. Wendy Tang
Jadranka Skorin-Kapov

Full text: English, pdf (425 KB), pp. 1-14, downloads: 2,276*
APA 6th Edition
Tang, K.W. and Skorin-Kapov, J. (2001). Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization. Journal of computing and information technology, 9 (1), 1-14. https://doi.org/10.2498/cit.2001.01.01
MLA 8th Edition
Tang, K. Wendy and Jadranka Skorin-Kapov. "Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization." Journal of computing and information technology, vol. 9, no. 1, 2001, pp. 1-14. https://doi.org/10.2498/cit.2001.01.01. Accessed 01.10.2020.
Chicago 17th Edition
Tang, K. Wendy and Jadranka Skorin-Kapov. "Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization." Journal of computing and information technology 9, no. 1 (2001): 1-14. https://doi.org/10.2498/cit.2001.01.01
Harvard
Tang, K.W. and Skorin-Kapov, J. (2001). 'Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization', Journal of computing and information technology, 9(1), pp. 1-14. https://doi.org/10.2498/cit.2001.01.01
Vancouver
Tang KW, Skorin-Kapov J. Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization. Journal of computing and information technology [Internet]. 2001 [accessed 01.10.2020.];9(1):1-14. https://doi.org/10.2498/cit.2001.01.01
IEEE
K.W. Tang and J. Skorin-Kapov, "Training Artificial Neural Networks: Backpropagation via Nonlinear Optimization", Journal of computing and information technology, vol. 9, no. 1, pp. 1-14, 2001. [Online]. https://doi.org/10.2498/cit.2001.01.01

Abstract
In this paper we explore different strategies to guide the backpropagation algorithm used for training artificial neural networks. Two variants of the steepest-descent-based backpropagation algorithm and four variants of the conjugate gradient algorithm are tested. The variants differ in whether or not the time component is used, and whether or not additional gradient information is utilized during one-dimensional optimization. Testing is performed on randomly generated data as well as on benchmark data regarding energy prediction. Based on our test results, the most promising backpropagation strategy appears to be to start with the steepest descent algorithm and then continue with the conjugate gradient algorithm. The backpropagation-through-time strategy combined with conjugate gradients appears promising as well.
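The two-phase strategy the abstract recommends (steepest descent first, then conjugate gradient) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy XOR dataset, the 2-2-1 sigmoid network, the fixed-step steepest-descent phase, and the Polak-Ribière conjugate gradient update with Armijo backtracking line search are all assumptions for the sake of a runnable example, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data as a stand-in for the paper's test problems (an assumption).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

# Parameter shapes of a 2-2-1 sigmoid network: W1, b1, W2, b2.
shapes = [(2, 2), (1, 2), (2, 1), (1, 1)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    out, i = [], 0
    for s, n in zip(shapes, sizes):
        out.append(theta[i:i + n].reshape(s))
        i += n
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(theta):
    """Squared-error loss and its gradient via backpropagation."""
    W1, b1, W2, b2 = unpack(theta)
    A1 = sigmoid(X @ W1 + b1)      # hidden activations
    A2 = sigmoid(A1 @ W2 + b2)     # network output
    E = A2 - Y
    loss = 0.5 * np.sum(E ** 2)
    d2 = E * A2 * (1 - A2)         # chain rule through output sigmoid
    d1 = (d2 @ W2.T) * A1 * (1 - A1)
    grads = [X.T @ d1, d1.sum(0, keepdims=True),
             A1.T @ d2, d2.sum(0, keepdims=True)]
    return loss, np.concatenate([g.ravel() for g in grads])

def line_search(theta, d, loss, g, alpha=1.0, c=1e-4):
    """One-dimensional optimization along d: Armijo backtracking."""
    slope = g @ d
    while loss_and_grad(theta + alpha * d)[0] > loss + c * alpha * slope:
        alpha *= 0.5
        if alpha < 1e-10:
            break
    return alpha

theta = rng.standard_normal(sum(sizes)) * 0.5
initial_loss = loss_and_grad(theta)[0]

# Phase 1: plain steepest-descent backpropagation with a fixed step.
for _ in range(200):
    loss, g = loss_and_grad(theta)
    theta = theta - 0.5 * g

# Phase 2: Polak-Ribiere nonlinear conjugate gradient.
loss, g = loss_and_grad(theta)
d = -g
for _ in range(200):
    if g @ d >= 0:                 # restart if d is not a descent direction
        d = -g
    alpha = line_search(theta, d, loss, g)
    theta = theta + alpha * d
    loss, g_new = loss_and_grad(theta)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))
    d = -g_new + beta * d
    g = g_new

final_loss = loss
print(initial_loss, final_loss)
```

The steepest-descent phase makes cheap early progress while the error surface is steep; switching to conjugate gradient afterwards exploits curvature information accumulated in the search directions, which mirrors the combined strategy the abstract reports as most promising.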

Hrčak ID: 44817

URI
https://hrcak.srce.hr/44817

Visits: 2,445 *