Joint Recovery of Dense Correspondence and
Cosegmentation in Two Images

Tatsunori Taniai1  Sudipta Sinha2  Yoichi Sato1
1The University of Tokyo, Japan
2Microsoft Research


CVPR 2016

Method overview

[Figure: overview of the proposed method]
Abstract -- We propose a new technique to jointly recover cosegmentation and dense per-pixel correspondence in two images. Our method parameterizes the correspondence field using piecewise similarity transformations and recovers a mapping between the estimated common “foreground” regions in the two images, allowing them to be precisely aligned. Our formulation is based on a hierarchical Markov random field model with segmentation and transformation labels. The hierarchical structure uses nested image regions to constrain inference across multiple scales. Unlike prior hierarchical methods, which assume that the structure is given, our proposed iterative technique dynamically recovers the structure along with the labeling. This joint inference is performed in an energy minimization framework using iterated graph cuts. We evaluate our method on a new dataset of 400 image pairs with manually obtained ground truth, where it outperforms state-of-the-art methods designed specifically for either cosegmentation or correspondence estimation.
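The full hierarchical model and its optimization are described in the paper. Purely as an illustration of the graph-cut building block that this kind of energy minimization rests on, here is a minimal Python sketch that labels pixels foreground/background with a single s-t min-cut. It assumes the third-party PyMaxflow package, and the cost terms are hypothetical placeholders, not our actual energy:

    # Minimal sketch of a binary graph-cut step (NOT our full hierarchical
    # model): label each pixel foreground/background with one s-t min-cut.
    # Assumes the third-party PyMaxflow package (pip install PyMaxflow).
    import numpy as np
    import maxflow

    def graph_cut_segmentation(unary_fg, unary_bg, smoothness=1.0):
        # unary_fg / unary_bg: HxW arrays of placeholder per-pixel costs
        # for labeling a pixel foreground or background.
        g = maxflow.Graph[float]()
        nodes = g.add_grid_nodes(unary_fg.shape)
        # 4-connected Potts smoothness term with a uniform weight.
        g.add_grid_edges(nodes, smoothness)
        # Terminal edges encode the unary costs: a pixel that ends up on
        # the sink side of the cut (foreground here) pays its source
        # capacity, and vice versa.
        g.add_grid_tedges(nodes, unary_fg, unary_bg)
        g.maxflow()
        # True where the pixel is on the sink side, i.e. foreground.
        return g.get_grid_segments(nodes)

Our method alternates cut-based updates of this kind over both segmentation and piecewise similarity transformation labels; see the paper for the actual energy terms and the hierarchical structure.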

 
We have released our new dataset and evaluation kit!

 
  • Paper [pdf]
  • Extended supplementary material [pdf]
  • Poster at CVPR 2016 [pdf]
  • Dataset (1.2 GB) [zip]
  • Evaluation and visualization tools (MATLAB/C++/WinBin) [GitHub]
  • Benchmark results of all methods (2.1 GB compressed / 7.8 GB uncompressed) [rar]
  • Excel score sheets and flow accuracy plots for all methods [zip]
  • Excel score sheets using the precision metric for segmentation accuracy (not shown in the paper) [zip]
  • Demonstration executable binaries now available! (WinBin) [GitHub]
 

Supplementary Video

[Video: supplementary video]

Score Correction of the Benchmark

  We identified a bug in our initial evaluation code. Because of this, the segmentation accuracy numbers in Table 1 were incorrect. The corrected numbers are shown in the table below (PDF). The changes to the scores are small and do not alter the relative ranking of the methods compared to the originally reported results. If you use our dataset and evaluation results in your work, please use the corrected version. This issue has been fixed in the latest evaluation tools and score sheets linked above.
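The benchmark's segmentation scores are computed by the released evaluation kit, which is the authoritative implementation. As a generic reference only, the standard intersection-over-union (Jaccard) and precision measures for binary masks can be sketched as follows; the kit's exact conventions may differ in edge cases:

    # Generic reference sketch of standard segmentation accuracy measures
    # for binary masks (illustrative only; the released evaluation kit is
    # the authoritative implementation of the benchmark's scores).
    import numpy as np

    def segmentation_scores(pred, gt):
        # pred, gt: HxW foreground masks.
        pred = pred.astype(bool)
        gt = gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        iou = inter / union if union > 0 else 1.0         # Jaccard index
        prec = inter / pred.sum() if pred.sum() > 0 else 1.0
        return iou, prec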

 

Corrected Table 1