Weakly-supervised hand part segmentation from depth images
Date
2021-07-02
Author
Rezaei, Mohammad
Farahanipad, Farnaz
Dillhoff, Alex
Athitsos, Vassilis
Elmasri, Ramez
Abstract
Existing learning-based methods require large amounts of labeled data to produce accurate part segmentation. However, acquiring ground-truth labels is costly, giving rise to a need for methods that either require fewer labels or can use other, already available labels as a form of weak supervision during training. In this paper, to mitigate the burden of labeled-data acquisition, we propose a data-driven method for hand part segmentation on depth maps that requires no extra effort to obtain segmentation labels. The proposed method uses the major 3D hand joint locations already provided by public datasets to learn to estimate hand shape and pose from a depth map. Given the pose and shape of a hand, the corresponding 3D hand mesh is generated with a deformable hand model and then rendered to a color image using a texture derived from the Linear Blend Skinning (LBS) weights of the hand model. Segmentation labels are then computed from the rendered color image. Since segmentation labels are not provided with current public datasets, we manually annotate a subset of the NYU dataset to evaluate our method quantitatively, and show that an mIoU of 42% can be achieved with a model trained without any segmentation-based labels. Both qualitative and quantitative results confirm the effectiveness of our method.
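
As a rough illustration of the labeling step described in the abstract, the Python sketch below assigns each mesh vertex to the part with the largest LBS skinning weight and recovers per-pixel part indices from a rendered part-color image. This is a hypothetical sketch under assumed conventions, not the authors' implementation; the names `skinning_weights`, `PART_COLORS`, and both functions are illustrative assumptions.

```python
import numpy as np

def vertex_part_labels(skinning_weights: np.ndarray) -> np.ndarray:
    """Label each vertex with the part of its dominant LBS weight.

    skinning_weights: (V, J) LBS weights of a deformable hand model,
    one row per vertex, one column per joint/part (hypothetical layout).
    """
    return skinning_weights.argmax(axis=1)  # (V,)

# One distinct flat color per part; rendering the mesh with this texture
# yields an image whose colors identify hand parts. Colors are assumptions.
PART_COLORS = np.array([
    [255, 0, 0],    # e.g. palm
    [0, 255, 0],    # e.g. thumb
    [0, 0, 255],    # e.g. index
    # ... one color per remaining part
], dtype=np.uint8)

def image_to_segmentation(rendered: np.ndarray) -> np.ndarray:
    """Map a rendered (H, W, 3) part-color image to (H, W) part indices."""
    h, w, _ = rendered.shape
    flat = rendered.reshape(-1, 3).astype(np.int32)
    # Nearest part color per pixel (exact match if rendering is unshaded).
    dists = ((flat[:, None, :] - PART_COLORS[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1).reshape(h, w)
```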
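The quantitative evaluation is reported as mean intersection-over-union (mIoU). A minimal sketch of that standard metric, assuming integer-labeled prediction and ground-truth maps of the same shape:

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_parts: int) -> float:
    """Mean intersection-over-union across hand parts."""
    ious = []
    for c in range(num_parts):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip parts absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```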