
Original scientific paper

The Missing Information Principle in Computer Vision

J. Hornegger; Lehrstuhl für Mustererkennung (Informatik 5), Universität Erlangen-Nürnberg, Erlangen, Germany
H. Niemann; Lehrstuhl für Mustererkennung (Informatik 5), Universität Erlangen-Nürnberg, Erlangen, Germany

Full text: English, PDF (4 MB), pages 201-209

How to cite:
APA 6th Edition
Hornegger, J. & Niemann, H. (1994). The Missing Information Principle in Computer Vision. Journal of computing and information technology, 2 (3), 201-209. Retrieved from https://hrcak.srce.hr/150459
MLA 8th Edition
Hornegger, J. and H. Niemann. "The Missing Information Principle in Computer Vision." Journal of computing and information technology, vol. 2, no. 3, 1994, pp. 201-209. https://hrcak.srce.hr/150459. Accessed 3 Apr. 2020.
Chicago 17th Edition
Hornegger, J. and H. Niemann. "The Missing Information Principle in Computer Vision." Journal of computing and information technology 2, no. 3 (1994): 201-209. https://hrcak.srce.hr/150459
Harvard
Hornegger, J., and Niemann, H. (1994). 'The Missing Information Principle in Computer Vision', Journal of computing and information technology, 2(3), pp. 201-209. Available at: https://hrcak.srce.hr/150459 (Accessed 03 April 2020)
Vancouver
Hornegger J, Niemann H. The Missing Information Principle in Computer Vision. Journal of computing and information technology [Internet]. 1994 [cited 2020 April 03];2(3):201-209. Available from: https://hrcak.srce.hr/150459
IEEE
J. Hornegger and H. Niemann, "The Missing Information Principle in Computer Vision", Journal of computing and information technology, vol. 2, no. 3, pp. 201-209, 1994. [Online]. Available: https://hrcak.srce.hr/150459. [Accessed: 03 April 2020]

Abstract
Central problems in the field of computer vision are learning object models from examples, classification, and localization of objects. In this paper we motivate the use of a classical statistical approach to deal with these problems: the missing information principle. Based on this general technique we derive the Expectation Maximization algorithm and deduce statistical methods for learning objects from invariant features using Hidden Markov Models and from non-invariant features using Gaussian mixture density functions. The derived training algorithms also address the problem of learning 3D objects from two-dimensional views. Furthermore, it is shown how the position and orientation of a three-dimensional object can be computed. The paper concludes with some experimental results.
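
As general background for the Expectation Maximization algorithm and the Gaussian mixture densities mentioned in the abstract, the standard EM iteration for a Gaussian mixture is sketched below. These are the textbook update formulas, not the paper's own derivation, and the notation (mixture weights \pi_k, means \mu_k, covariances \Sigma_k, responsibilities \gamma_{ik}) is generic rather than taken from the article.

\begin{align*}
  \text{E-step:}\quad
  \gamma_{ik} &= \frac{\pi_k\,\mathcal{N}(x_i \mid \mu_k, \Sigma_k)}
                     {\sum_{j=1}^{K} \pi_j\,\mathcal{N}(x_i \mid \mu_j, \Sigma_j)} \\
  \text{M-step:}\quad
  \pi_k &= \frac{1}{N}\sum_{i=1}^{N}\gamma_{ik}, \qquad
  \mu_k = \frac{\sum_{i=1}^{N}\gamma_{ik}\,x_i}{\sum_{i=1}^{N}\gamma_{ik}}, \qquad
  \Sigma_k = \frac{\sum_{i=1}^{N}\gamma_{ik}\,(x_i-\mu_k)(x_i-\mu_k)^{\top}}{\sum_{i=1}^{N}\gamma_{ik}}
\end{align*}

Each iteration maximizes the expected complete-data log-likelihood Q(\theta \mid \theta^{(t)}) = E[\log p(X, Z \mid \theta) \mid X, \theta^{(t)}], where the unobserved component labels Z play the role of the missing information; the observed-data likelihood is guaranteed not to decrease from one iteration to the next.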

Keywords
Expectation Maximization algorithm; Hidden Markov Models; statistical object recognition

Hrčak ID: 150459

URI
https://hrcak.srce.hr/150459
