Review article

https://doi.org/10.19279/TVZ.PD.2024-12-1-06

CONCEPTUAL DISCUSSION OF EXPLAINABILITY OF SUPERVISED FEATURE LEARNING FOR CLASSIFICATION

Dino Vlahek; UM FERI, Koroška cesta 46, 2000 Maribor, Slovenia *
Bojan Nožica; Zagreb University of Applied Sciences, Vrbik 8, Zagreb, Croatia *

* Corresponding author.


Full text: Croatian (PDF, 467 kB)

Pages: 43-50


Abstract

This paper presents the basic ideas of supervised feature learning for classification, with special attention given to the explainability of these approaches. Existing feature learning methods are either not explainable or limited in their predictive performance because they cannot recombine input features. Approaches that increase the dimensionality of the input feature space are slow, since they require iterative non-convex optimization and tuning of numerous hidden-dimension configurations, and their authors generally do not provide an explanation of the learned model. However, explanations can be achieved with varying degrees of success by learning interpretable models around a given pattern of interest or by evaluating the importance of each feature in the classification output.
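
As an illustration of the second strategy mentioned above, the sketch below (not taken from the paper; the data set, classifier, and permutation procedure are assumptions chosen for demonstration) evaluates the importance of each feature by measuring how much classification accuracy drops when that feature's values are shuffled.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit an arbitrary black-box classifier (here a random forest on the Iris data, as an example).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Permutation importance: shuffle one feature at a time and record the accuracy drop.
rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    drops = []
    for _ in range(10):
        X_perm = X_test.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the link between feature j and the label
        drops.append(baseline - model.score(X_perm, y_test))
    print(f"feature {j}: mean accuracy drop = {np.mean(drops):.3f}")

Features whose permutation causes a large accuracy drop are deemed important for the classification output; the first strategy, by contrast, fits a simple interpretable model to the black-box predictions in the neighbourhood of a single pattern of interest.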

Keywords

explainable artificial intelligence; classification; feature learning; knowledge discovery

Hrčak ID: 326434

URI: https://hrcak.srce.hr/326434

Publication date: 15.3.2024.

Article data in other languages: Croatian
