[Paper Reading] A CLOSER LOOK AT FEW-SHOT CLASSIFICATION

Ya-Liang Allen Chang
4 min read · Jun 6, 2019

Abstract

The paper presents:

  1. A consistent comparative analysis of several representative few-shot
    classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences
  2. A modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets
  3. A new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms

Introduction

  • The strong performance of deep learning heavily relies on training a network with abundant labeled instances with diverse visual variations
  • High cost of data
  • Humans could learn with few data
  • Few-shot classification: learning to generalize to classes unseen during training, given only a few labeled examples
  • One promising solution: meta-learning paradigm where transferable knowledge is extracted and propagated from a collection of tasks to prevent overfitting and improve generalization
  • Another solution: directly predicting the weights of the classifiers for novel classes
  • Limitations of current practice: discrepancies in implementation details across methods, and the lack of domain shift between the base and novel classes in standard evaluation

Contributions

  • Provide a unified testbed for several few-shot classification algorithms for a fair comparison. Their empirical results reveal that the shallow backbones commonly used in existing work favor methods that explicitly reduce intra-class variation, and that increasing the capacity of the feature backbone shrinks the performance gap between methods when domain differences are limited
  • Show that a baseline method with a distance-based classifier surprisingly achieves competitive performance with the state-of-the-art meta-learning methods on both mini-ImageNet and CUB datasets
  • Investigate a practical evaluation setting where base and novel classes are sampled from different domains. They show that current few-shot classification algorithms fail to address such domain shifts and are inferior even to the baseline method, highlighting the importance of learning
    to adapt to domain differences in few-shot learning

Related Work

Initialization based methods

  • Tackle the few-shot learning problem by “learning to fine-tune”
  • Good model initialization
  • Learning an optimizer

Distance metric learning based methods

  • Address the few-shot classification problem by “learning to compare”
  • If a model can determine the similarity of two images, it can classify an unseen input image with the labeled instances
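As a concrete sketch of "learning to compare": embed the labeled support images and the query with a shared backbone, then assign the query to the class whose mean support embedding (prototype) is nearest, in the spirit of ProtoNet. The function below assumes the embeddings are already computed as NumPy vectors; the names are illustrative, not from the paper's code.

```python
import numpy as np

def classify_by_comparison(support_feats, support_labels, query_feat):
    """Nearest-prototype rule: assign the query to the class whose mean
    support embedding is closest in Euclidean distance."""
    classes = sorted(set(support_labels))
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = np.stack([
        np.mean([f for f, y in zip(support_feats, support_labels) if y == c],
                axis=0)
        for c in classes
    ])
    dists = np.linalg.norm(prototypes - query_feat, axis=1)
    return classes[int(np.argmin(dists))]
```

Other distance-based methods vary this recipe: MatchingNet compares against each support example individually, and RelationNet learns the comparison function itself instead of using a fixed metric.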

Hallucination based methods

  • Directly deal with data deficiency by “learning to augment”
  • Learn a generator from data in the base classes and use it to hallucinate new novel-class data for augmentation

Domain adaptation

  • Aims to reduce the domain shift between the source and target domains; the cross-domain few-shot setting is related, since novel classes are drawn from a different domain

Overview of Few-shot Classification Algorithms

Baseline

Baseline++
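In the paper, Baseline trains an ordinary linear classifier on top of the feature extractor, while Baseline++ replaces the linear layer with cosine similarity between the normalized feature and per-class weight vectors, which explicitly reduces intra-class variation. A minimal NumPy sketch of the Baseline++ scoring rule follows; the fixed `scale` temperature here stands in for the paper's learnable scalar, and the function names are illustrative.

```python
import numpy as np

def cosine_scores(features, weights, scale=10.0):
    """Baseline++-style class scores: cosine similarity between
    L2-normalized features (N, D) and class weight vectors (C, D),
    scaled before a softmax would be applied during training."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * f @ w.T

def predict(features, weights):
    """Hard predictions: index of the most similar class weight vector."""
    return np.argmax(cosine_scores(features, weights), axis=1)
```

Because only the direction of each weight vector matters, each class is effectively represented by a point on the unit sphere that its features are pulled toward, which is the mechanism the authors credit for reducing intra-class variation.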

Meta-learning Algorithms
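The compared meta-learning methods (MatchingNet, ProtoNet, RelationNet, and MAML in the paper) are all trained on episodes that mimic the test-time N-way K-shot task. A hedged sketch of how one such episode could be sampled; the function name and defaults are illustrative, not taken from the authors' code.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=16, rng=random):
    """Sample one N-way K-shot episode from `dataset`, a list of
    (example, label) pairs. Returns (support, query) lists of pairs."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    # Pick N classes, then K support + n_query query examples per class.
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        examples = rng.sample(by_class[c], k_shot + n_query)
        support += [(x, c) for x in examples[:k_shot]]
        query += [(x, c) for x in examples[k_shot:]]
    return support, query
```

During meta-training, the model classifies the query examples given only the support set, so each episode rehearses exactly the few-shot problem it will face on novel classes.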

Experimental Results

Scenarios/Datasets:

  1. Generic object recognition/mini-ImageNet
  2. Fine-grained image classification/CUB-200-2011
  3. Cross-domain adaptation/mini-ImageNet → CUB

Evaluation Using Standard Settings

Conclusion

The authors investigate the limits of the standard evaluation setting for few-shot classification. By comparing methods on common ground, they show that Baseline++ is competitive with the state of the art under standard conditions, and that even the plain Baseline matches recent state-of-the-art meta-learning algorithms on both the CUB and mini-ImageNet benchmarks when a deeper feature backbone is used. Surprisingly, the Baseline also compares favorably against all evaluated meta-learning algorithms in the realistic scenario where there is a domain shift between the base and novel classes.

Reference

Chen, Wei-Yu, et al. “A closer look at few-shot classification.” arXiv preprint arXiv:1904.04232 (2019).
