Joint Recovery of Dense Correspondence and
Cosegmentation in Two Images

Tatsunori Taniai¹   Sudipta Sinha²   Yoichi Sato¹
¹The University of Tokyo, Japan
²Microsoft Research


CVPR 2016

Method overview

Abstract -- We propose a new technique to jointly recover cosegmentation and dense per-pixel correspondence in two images. Our method parameterizes the correspondence field using piecewise similarity transformations and recovers a mapping between the estimated common “foreground” regions in the two images, allowing them to be precisely aligned. Our formulation is based on a hierarchical Markov random field model with segmentation and transformation labels. The hierarchical structure uses nested image regions to constrain inference across multiple scales. Unlike prior hierarchical methods, which assume that the structure is given, our proposed iterative technique dynamically recovers the structure along with the labeling. This joint inference is performed in an energy minimization framework using iterated graph cuts. We evaluate our method on a new dataset of 400 image pairs with manually obtained ground truth, where it outperforms state-of-the-art methods designed specifically for either cosegmentation or correspondence estimation.
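As a rough illustration of the parameterization above, the sketch below shows how per-pixel similarity-transformation labels induce a dense correspondence (flow) field. The hard part, estimating the segmentation and transformation labels with the hierarchical MRF and iterated graph cuts, is what the paper addresses and is not shown here. This is a hypothetical Python/NumPy sketch, not the authors' code, and all names are illustrative.

import numpy as np

def similarity_transform(points, scale, theta, tx, ty):
    # Map Nx2 pixel coordinates by a 2D similarity transform:
    # uniform scaling, rotation by theta, then translation by (tx, ty).
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scale * points @ R.T + np.array([tx, ty])

def flow_from_labels(coords, labels, transforms):
    # coords: Nx2 pixel coordinates of foreground pixels in image 1.
    # labels: length-N array assigning each pixel a transformation label.
    # transforms: list of (scale, theta, tx, ty) tuples, one per label.
    # Returns the per-pixel displacement to the corresponding point in image 2.
    flow = np.zeros_like(coords, dtype=float)
    for k, params in enumerate(transforms):
        mask = labels == k
        flow[mask] = similarity_transform(coords[mask], *params) - coords[mask]
    return flow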

 
We released our new dataset and evaluation kit!!

  • Paper [pdf]
  • Extended supplementary [pdf]
  • Poster at CVPR 2016 [pdf]
  • Dataset (1.2GB) [zip (USA)] [zip (JPN)]
  • Evaluation and Visualization tools (MATLAB/C++/WinBin) [GitHub]
  • Benchmark results of all methods (2.1GB / uncomp 7.8GB) [rar (USA)] [rar (JPN)]
  • Excel score sheets and flow accuracy plots of all methods [zip]
  • Demonstration executable binaries now available!! (WinBin) [GitHub]
 

Supplementary Video

Score Correction of Benchmark

We identified a bug in our initial evaluation code; because of it, the segmentation accuracy numbers in Table 1 were incorrect. The changes to the scores are small and do not alter the relative performance of the methods, but if you use our dataset and evaluation results in your work, we recommend using the corrected numbers shown in the table below (PDF). This issue has been fixed in the latest evaluation tool and score sheets in the links above.

 

Corrected Table 1 [PDF]
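For context, the segmentation accuracy reported in the benchmark compares estimated foreground masks against the manually obtained ground-truth masks; the exact definition is implemented in the released evaluation tool. Below is a minimal, hypothetical Python/NumPy sketch of one common such measure, the intersection-over-union (Jaccard) score, provided only as an illustration and not taken from the benchmark code.

import numpy as np

def intersection_over_union(pred_mask, gt_mask):
    # Jaccard score between two binary foreground masks of the same size.
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return float(np.logical_and(pred, gt).sum()) / union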