Modality-Invariant Image Classification

Abstract

  • We present a unified framework for classifying image sets captured under varying modality conditions. Our approach is
    motivated by the key observation that the image feature distribution is influenced simultaneously by the semantic-class
    label and the modality category, which limits the performance of conventional methods on this task. With this insight,
    we introduce modality uniqueness, a discriminative weight that separates each modality cluster from all other clusters.
    Leveraging the modality uniqueness, our framework is formulated as unsupervised modality clustering and classifier
    learning based on a modality-invariant similarity kernel. Specifically, in the assignment step, training images are first
    assigned to the most similar cluster in terms of modality. In the update step, the modality uniqueness and the sparse
    dictionary are updated based on the current cluster hypothesis. These two steps are iterated in an alternating manner.
    Based on the final clusters, a modality-invariant marginalized kernel is then computed, in which the similarities between
    the reconstructed features of each modality are aggregated across all clusters. Our framework enables reliable inference
    of the semantic-class category of an image, even under large photometric variations. Experimental results show that our
    method outperforms conventional methods on various benchmarks, e.g., landmark identification under severely varying
    weather conditions, domain-adaptation image classification, and RGB-NIR image classification.
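
    The Python sketch below illustrates, under loose assumptions, the alternating assignment/update loop and the
    marginalized-kernel aggregation described in the abstract. It is not the paper's implementation: the function and
    variable names (ridge_codes, cluster_modalities, marginalized_kernel), the ridge-regularized coding used in place of
    true sparse coding, the SVD-basis dictionary update, and the uniqueness heuristic are all illustrative stand-ins.

        import numpy as np

        def ridge_codes(D, X, lam=0.1):
            # Encode rows of X on dictionary D (atoms x dim) with ridge-regularized
            # least squares -- a simple stand-in for the paper's sparse coding step.
            A = D @ D.T + lam * np.eye(D.shape[0])
            return np.linalg.solve(A, D @ X.T).T              # codes: (n_samples, n_atoms)

        def cluster_modalities(X, n_clusters=3, n_atoms=32, n_iters=10, seed=0):
            # Alternate between assigning images to modality clusters and updating each
            # cluster's dictionary and uniqueness weight (all heuristics are assumptions).
            rng = np.random.default_rng(seed)
            n, d = X.shape
            labels = rng.integers(0, n_clusters, size=n)
            dicts = [rng.standard_normal((n_atoms, d)) for _ in range(n_clusters)]
            uniq = np.ones(n_clusters)
            for _ in range(n_iters):
                # Assignment step: send each image to the cluster whose dictionary
                # reconstructs it with the smallest error.
                errs = np.stack([np.linalg.norm(X - ridge_codes(dicts[k], X) @ dicts[k], axis=1)
                                 for k in range(n_clusters)], axis=1)
                labels = errs.argmin(axis=1)
                # Update step: refit each cluster's dictionary (an SVD basis here, as a
                # placeholder for sparse dictionary learning) and its uniqueness weight,
                # taken as how much better the cluster's own dictionary fits its members
                # than the other clusters' dictionaries do.
                for k in range(n_clusters):
                    Xk = X[labels == k]
                    if len(Xk) == 0:
                        continue
                    _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
                    dicts[k] = Vt[:min(n_atoms, Vt.shape[0])]
                    own = np.linalg.norm(Xk - ridge_codes(dicts[k], Xk) @ dicts[k], axis=1).mean()
                    others = [np.linalg.norm(Xk - ridge_codes(dicts[j], Xk) @ dicts[j], axis=1).mean()
                              for j in range(n_clusters) if j != k]
                    uniq[k] = (np.mean(others) if others else own) / (own + 1e-8)
            return labels, dicts, uniq

        def marginalized_kernel(X, dicts, uniq):
            # Aggregate similarities between features reconstructed under every modality
            # hypothesis, weighted by uniqueness -- a rough analogue of the paper's
            # modality-invariant marginalized kernel.
            K = np.zeros((len(X), len(X)))
            w = uniq / uniq.sum()
            for wk, D in zip(w, dicts):
                R = ridge_codes(D, X) @ D                     # reconstruction under one modality
                R /= np.linalg.norm(R, axis=1, keepdims=True) + 1e-8
                K += wk * (R @ R.T)                           # weighted cosine similarity
            return K

    In this sketch, the resulting kernel K would then be handed to any kernel-based classifier (e.g., an SVM) for
    semantic-class prediction, mirroring the classifier-learning stage described in the abstract.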

Publication

  • Modality-Invariant Image Classification Based on Modality Uniqueness and Dictionary Learning

    Seungryong Kim, Rui Cai, Kihong Park, Sunok Kim, and Kwanghoon Sohn

    IEEE Transactions on Image Processing (submitted)

    Paper: high-res version (12MB) [pdf]


Download
