Generative Modeling of Shape-Dependent Self-Contact Human Poses


Takehiko Ohkawa1,2*   Jihyun Lee1,3*   Shunsuke Saito1   Jason Saragih1   Fabian Prada1   Yichen Xu1   Shoou-I Yu1   Ryosuke Furuta2   Yoichi Sato2   Takaaki Shiratori1

1Codec Avatars Lab, Meta  2The University of Tokyo   3KAIST 

*Work done during an internship at Meta

IEEE/CVF International Conference on Computer Vision (ICCV), 2025





Abstract

One can hardly model self-contact in human poses without considering the underlying body shape. For example, a belly-rubbing pose performed by a person with a low BMI results in the hand penetrating the belly when transferred to a person with a high BMI. Despite its relevance, existing self-contact datasets lack variety in self-contact poses and precise body shape registration, limiting conclusive analysis of the relationship between self-contact poses and body shapes. To address this, we first introduce Goliath-SC, the first extensive self-contact dataset with precise body shape registration, consisting of 383K self-contact poses across 130 subjects. Using this dataset, we propose a generative model of a self-contact prior conditioned on body shape parameters, based on body-part-wise latent diffusion with self-attention. We further incorporate this prior into single-view human pose estimation, refining estimated poses so that they remain in contact. Our experiments suggest that shape conditioning is vital to successfully modeling the self-contact pose distribution, and that it in turn improves single-view pose estimation under self-contact.
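To make the denoiser described above more concrete, below is a minimal PyTorch sketch of one possible instantiation of a body-part-wise, shape-conditioned denoiser with self-attention over part tokens. All names (e.g., PartwiseShapeConditionedDenoiser), the latent and shape dimensions, and the layer sizes are illustrative assumptions, not the authors' implementation; a full latent-diffusion prior would wrap such a denoiser in a standard DDPM/DDIM noising and sampling loop.

```python
# Hypothetical sketch: per-part pose latents are treated as tokens, conditioned on
# body shape parameters (assumed 10-D betas) and a diffusion timestep, and mixed
# with self-attention across parts. Not the paper's code.
import torch
import torch.nn as nn


class PartwiseShapeConditionedDenoiser(nn.Module):
    def __init__(self, num_parts=16, latent_dim=32, shape_dim=10, width=128, heads=4):
        super().__init__()
        self.in_proj = nn.Linear(latent_dim, width)            # per-part latent -> token
        self.part_emb = nn.Parameter(torch.randn(num_parts, width) * 0.02)
        self.shape_proj = nn.Linear(shape_dim, width)          # body shape conditioning
        self.time_proj = nn.Sequential(nn.Linear(1, width), nn.SiLU(), nn.Linear(width, width))
        layer = nn.TransformerEncoderLayer(
            d_model=width, nhead=heads, dim_feedforward=4 * width, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.out_proj = nn.Linear(width, latent_dim)           # predict per-part noise

    def forward(self, noisy_latents, shape_params, t):
        # noisy_latents: (B, num_parts, latent_dim); shape_params: (B, shape_dim); t: (B,)
        tok = self.in_proj(noisy_latents) + self.part_emb      # part tokens + part embedding
        cond = self.shape_proj(shape_params) + self.time_proj(t[:, None].float())
        tok = tok + cond[:, None, :]                           # broadcast shape/time conditioning
        tok = self.blocks(tok)                                 # self-attention over body parts
        return self.out_proj(tok)                              # epsilon prediction per part


if __name__ == "__main__":
    B, P, D = 4, 16, 32
    model = PartwiseShapeConditionedDenoiser(num_parts=P, latent_dim=D)
    eps_hat = model(torch.randn(B, P, D), torch.randn(B, 10), torch.randint(0, 1000, (B,)))
    print(eps_hat.shape)  # torch.Size([4, 16, 32])
```

In this reading, shape conditioning enters as an additive token-wise bias, so the same noisy part latents are denoised differently for different body shapes, which is the property the abstract identifies as vital.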

