
Non-sparse feature mixing in object classification (Pattern Recognition and Media Understanding)

2009, IEICE Technical Report

Abstract

Recent research has shown that combining various image features significantly improves object classification performance. Multiple kernel learning (MKL) approaches, in which the mixing weights at the kernel level are optimized simultaneously with the classifier parameters, provide a well-founded framework to control the importance of each feature. As alternatives, boosting approaches can be used, in which single-kernel classifier outputs are combined with optimal mixing weights. Most of these approaches employ an ℓ1-regularization on the mixing weights, which promotes sparse solutions. Although sparsity offers several advantages, e.g., interpretability and lower computational cost in the test phase, the accuracy of sparse methods is often even worse than that of the simplest flat-weight combination. In this paper, we compare the accuracy of our recently developed non-sparse methods with the standard sparse counterparts on the PASCAL VOC
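To illustrate the kernel-level mixing the abstract describes, the sketch below combines two precomputed kernels with either flat (uniform) or sparse mixing weights. This is a minimal, hypothetical example with random toy features standing in for image feature channels; it does not reproduce the paper's MKL or boosting optimization, only the weighted-sum kernel combination both approaches rely on.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
# Two hypothetical feature channels (e.g., color and shape) for the same 6 images.
X_color = rng.normal(size=(6, 4))
X_shape = rng.normal(size=(6, 8))
kernels = [rbf_kernel(X_color), rbf_kernel(X_shape)]

# Flat (non-sparse) mixing: uniform weights over all kernels.
beta_flat = np.full(len(kernels), 1.0 / len(kernels))
# An extreme sparse solution puts all mass on a single kernel.
beta_sparse = np.array([1.0, 0.0])

# The combined kernel is a convex combination of the base kernels,
# which can then be fed to any kernel classifier (e.g., an SVM).
K_flat = sum(b * K for b, K in zip(beta_flat, kernels))
K_sparse = sum(b * K for b, K in zip(beta_sparse, kernels))
```

Because the weights are non-negative and sum to one, the combined matrix remains a valid (positive semidefinite) kernel; the sparse choice simply discards the second feature channel entirely.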