
Original scientific paper

One Step Strategy for Learning RBF Network Parameters

Mladen Široki ; Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Zagreb, Croatia

Full text: English, PDF (5 MB), pp. 245-254, downloads: 61*
APA 6th Edition
Široki, M. (1995). One Step Strategy for Learning RBF Network Parameters. Journal of computing and information technology, 3(4), 245-254. Retrieved from https://hrcak.srce.hr/150415
MLA 8th Edition
Široki, Mladen. "One Step Strategy for Learning RBF Network Parameters." Journal of computing and information technology, vol. 3, br. 4, 1995, str. 245-254. https://hrcak.srce.hr/150415. Citirano 23.02.2020.
Chicago 17th Edition
Široki, Mladen. "One Step Strategy for Learning RBF Network Parameters." Journal of computing and information technology 3, br. 4 (1995): 245-254. https://hrcak.srce.hr/150415
Harvard
Široki, M. (1995). 'One Step Strategy for Learning RBF Network Parameters', Journal of computing and information technology, 3(4), pp. 245-254. Available at: https://hrcak.srce.hr/150415 (Accessed: 23 February 2020)
Vancouver
Široki M. One Step Strategy for Learning RBF Network Parameters. Journal of computing and information technology [Internet]. 1995 [cited 2020 Feb 23];3(4):245-254. Available from: https://hrcak.srce.hr/150415
IEEE
M. Široki, "One Step Strategy for Learning RBF Network Parameters", Journal of computing and information technology, vol.3, br. 4, str. 245-254, 1995. [Online]. Dostupno na: https://hrcak.srce.hr/150415. [Citirano: 23.02.2020.]

Abstract
In this paper a new, one-step strategy for learning Radial Basis Function network parameters is proposed. In the RBF network model developed by Poggio and Girosi, three modifiable sets of parameters have to be determined during the learning stage: the positions of the centers t, the weighted norm ||x − t||²_W, and the output-layer weights c. The authors suggest that these parameters be set by some iterative nonlinear optimization method, such as gradient descent, conjugate gradient, or simulated annealing. The basic idea of this work is that if the hidden-layer radial basis functions are chosen to be multivariate Gaussian functions, the unknown parameters can be learned from the training set much faster, in a single step, by well-known statistical methods than by iterative optimization. In this approach the positions of the centers are learned by the K-means clustering method, the weighted norms are calculated as Mahalanobis distances between x and t, and the optimal output-layer weights are found by pseudoinversion. Calculation of the Mahalanobis distances involves estimation of the hidden units' covariance matrices Σ, which replace the weight matrices W. Two classification examples illustrate the usefulness of the method.
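A minimal sketch of the one-step procedure outlined in the abstract, written in Python with NumPy and scikit-learn: centers from K-means clustering, per-cluster covariance estimates standing in for the weight matrices W, and output-layer weights from a pseudoinverse. The function name fit_rbf_one_step, the diagonal regularization term, and the choice of clustering library are illustrative assumptions, not details taken from the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_rbf_one_step(X, Y, n_centers):
        """One-step RBF training sketch: K-means for the centers t, per-cluster
        covariance matrices Sigma for the Mahalanobis distance, and output-layer
        weights c obtained by pseudoinversion (names are illustrative)."""
        # 1) Positions of the centers t: K-means clustering of the training inputs.
        km = KMeans(n_clusters=n_centers, n_init=10).fit(X)
        centers = km.cluster_centers_

        # 2) Hidden-unit covariance matrices Sigma, estimated from the points
        #    assigned to each cluster; they replace the weight matrices W.
        dim = X.shape[1]
        covs = []
        for k in range(n_centers):
            pts = X[km.labels_ == k]
            cov = np.cov(pts, rowvar=False) if len(pts) > dim else np.eye(dim)
            covs.append(cov + 1e-6 * np.eye(dim))  # small ridge keeps Sigma invertible

        def hidden(Xq):
            # Multivariate Gaussian activations exp(-0.5 * (x - t)^T Sigma^{-1} (x - t)).
            H = np.empty((len(Xq), n_centers))
            for k in range(n_centers):
                d = Xq - centers[k]
                H[:, k] = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, np.linalg.inv(covs[k]), d))
            return H

        # 3) Optimal output-layer weights c by pseudoinversion of the hidden-layer matrix.
        c = np.linalg.pinv(hidden(X)) @ Y
        return lambda Xq: hidden(Xq) @ c

For a classification task such as the examples mentioned in the abstract, Y could be a one-hot matrix of class labels and the predicted class taken as the argmax of the returned network outputs.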

Keywords
Neural Networks; Radial Basis Functions Networks; Learning; Classification

Hrčak ID: 150415

URI
https://hrcak.srce.hr/150415

Visits: 99 *