5th International Conference on Image Analysis and Recognition, Povoa de Varzim, Portugal, 25 - 27 June 2008, vol. 5112, pp. 445-446
Stacked Generalization (SG) is an ensemble learning technique that aims to improve the performance of individual classifiers by combining them under a hierarchical architecture. In many applications, this technique performs better than the individual classifiers. In some applications, however, its performance degrades for reasons that are not well understood. In this work, the performance of the Stacked Generalization technique is analyzed with respect to the performance of the individual classifiers within the architecture. This work shows that the success of SG depends strongly on how the individual classifiers share the task of learning the training set, rather than on the performance of the individual classifiers themselves. The experiments explore the learning mechanisms by which SG achieves high performance. The relationship between the performance of the individual classifiers and that of SG is also investigated.
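To illustrate the technique discussed above, the following is a minimal sketch of stacked generalization, not the authors' experimental setup: base classifiers are combined by training a meta-level classifier on their predictions, and the ensemble's cross-validated accuracy is contrasted with that of each base learner. The sketch assumes scikit-learn and its StackingClassifier, together with a toy digits dataset chosen here only for demonstration.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy dataset standing in for the image-recognition data of the paper.
X, y = load_digits(return_X_y=True)

# Level-0 (individual) classifiers.
base_learners = [
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=0)),
]

# Level-1 combiner: a meta-classifier trained on the base learners'
# cross-validated predictions (the stacked generalization architecture).
stacked = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
)

# Compare each individual classifier against the stacked ensemble,
# mirroring the comparison the abstract describes.
for name, clf in base_learners + [("stacked", stacked)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

In this sketch the gain of the stacked model over its base learners depends on how differently the base learners err on the training data, which is the effect the abstract attributes to how the individual classifiers share the learning task.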