Wei-Chih Hung Varun Jampani Sifei Liu Pavlo Molchanov Ming-Hsuan Yang Jan Kautz
Parts provide a good intermediate representation of objects that is robust to camera, pose, and appearance variations. Existing work on part segmentation is dominated by supervised approaches that rely on large amounts of manual annotations and cannot generalize to unseen object categories. We propose a self-supervised deep learning approach for part segmentation, in which we devise several loss functions that aid in predicting part segments that are geometrically concentrated, robust to object variations, and semantically consistent across different object instances. Extensive experiments on different types of image collections demonstrate that our approach produces part segments that adhere to object boundaries and are more semantically consistent across object instances than those of existing self-supervised techniques.
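As an illustration of one of the loss terms described above, the snippet below is a minimal sketch (not the released implementation) of a geometric concentration loss on part response maps, written in PyTorch. The function name `concentration_loss` and its arguments are hypothetical; it assumes `part_maps` is a softmax-normalized tensor of shape `(B, K, H, W)`.

```python
import torch

def concentration_loss(part_maps, eps=1e-6):
    # part_maps: (B, K, H, W), softmax over the K part channels.
    B, K, H, W = part_maps.shape
    ys = torch.linspace(0, 1, H, device=part_maps.device).view(1, 1, H, 1)
    xs = torch.linspace(0, 1, W, device=part_maps.device).view(1, 1, 1, W)
    mass = part_maps.sum(dim=(2, 3), keepdim=True) + eps        # (B, K, 1, 1)
    cy = (part_maps * ys).sum(dim=(2, 3), keepdim=True) / mass  # part centroid (y)
    cx = (part_maps * xs).sum(dim=(2, 3), keepdim=True) / mass  # part centroid (x)
    # Penalize squared distance of responses from their part centroid,
    # encouraging each part to form a single concentrated region.
    sq_dist = (ys - cy) ** 2 + (xs - cx) ** 2
    return ((part_maps / mass) * sq_dist).sum(dim=(2, 3)).mean()

# Example usage with random part responses:
responses = torch.softmax(torch.randn(2, 5, 64, 64), dim=1)
print(concentration_loss(responses).item())
```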
Please consider citing if you make use of this work and/or the corresponding code:
@inproceedings{hung:CVPR:2019,
title = {SCOPS: Self-Supervised Co-Part Segmentation},
author = {Hung, Wei-Chih and Jampani, Varun and Liu, Sifei and Molchanov, Pavlo and Yang, Ming-Hsuan and Kautz, Jan},
booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
month = jun,
year = {2019}
}
SCOPS is implemented using the PyTorch neural network framework. Code is available in this GitHub repository: https://github.com/NVlabs/SCOPS.