EPOCH: Jointly Estimating the 3D Pose of Cameras and Humans

¹University of Trento, ²Epic Games
Teaser image.

(a) In human pose estimation, classical approaches regress the 2D/3D joint locations directly from an image. If ground truth is available, the camera parameters can be used or learned to refine accuracy. (b) Lifting approaches aim at retrieving the depth of each 2D joint to obtain the 3D pose. (c) We propose a novel paradigm that directly estimates the 3D pose and the camera from images. The 2D pose can then be computed by projecting the 3D coordinates into image space using the camera parameters. (d) Starting from the estimated 2D poses and camera parameters, we perform the lifting to 3D, improving performance with respect to current approaches.
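As a concrete illustration of step (c), the sketch below projects a 3D pose into image space with a full perspective camera, x ∝ K[R|t]y. The function name, joint count, and camera values are illustrative placeholders, not the paper's actual code.

    import numpy as np

    def project_to_image(y3d, K, R, t):
        """Project J 3D joints (J, 3) into the image with a full
        perspective camera: x ~ K [R | t] y in homogeneous coordinates."""
        y_cam = y3d @ R.T + t                # world -> camera coordinates
        x_hom = y_cam @ K.T                  # apply the intrinsics K
        return x_hom[:, :2] / x_hom[:, 2:3]  # perspective divide -> pixels

    # toy example: a 3-joint "pose" roughly two metres in front of the camera
    K = np.array([[1000.0, 0.0, 320.0],
                  [0.0, 1000.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    y3d = np.array([[0.0, 0.0, 2.0], [0.2, -0.5, 2.1], [-0.2, -0.5, 1.9]])
    print(project_to_image(y3d, K, R, t))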

Abstract

Monocular Human Pose Estimation (HPE) aims at determining the 3D positions of human joints from a single 2D image captured by a camera. However, a single 2D point in the image may correspond to multiple points in 3D space. Typically, the uniqueness of the 2D-3D relationship is approximated using an orthographic or weak-perspective camera model. In this study, instead of relying on approximations, we advocate for utilizing the full perspective camera model. This involves estimating camera parameters and establishing a precise, unambiguous 2D-3D relationship.
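To make the difference between the camera models concrete, the snippet below contrasts a weak-perspective projection, where every joint is divided by a single reference depth, with the full perspective model advocated here. The focal length, principal point, and pose values are illustrative assumptions, not any dataset's calibration.

    import numpy as np

    def full_perspective(y_cam, f, c):
        """Each joint is divided by its own depth."""
        return f * y_cam[:, :2] / y_cam[:, 2:3] + c

    def weak_perspective(y_cam, f, c):
        """All joints share one reference depth (here the mean depth),
        which is the approximation the full model avoids."""
        z_ref = y_cam[:, 2].mean()
        return f * y_cam[:, :2] / z_ref + c

    f, c = 1000.0, np.array([320.0, 240.0])
    # a pose whose joints span a noticeable depth range
    y_cam = np.array([[0.0, 0.0, 2.0], [0.3, -0.4, 2.6], [-0.3, -0.4, 1.6]])
    print(np.abs(full_perspective(y_cam, f, c) - weak_perspective(y_cam, f, c)))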

To do so, we introduce the EPOCH framework, comprising two main components: the pose lifter network (LiftNet) and the pose regressor network (RegNet). LiftNet utilizes the full perspective camera model to precisely estimate the 3D pose in an unsupervised manner. It takes a 2D pose and camera parameters as inputs and produces the corresponding 3D pose estimation. These inputs are obtained from RegNet, which starts from a single image and provides estimates for the 2D pose and camera parameters. RegNet utilizes only 2D pose data as weak supervision. Internally, RegNet predicts a 3D pose, which is then projected to 2D using the estimated camera parameters. This process enables RegNet to establish the unambiguous 2D-3D relationship.
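The following pseudocode sketches how the two networks compose at inference time; regnet and liftnet are hypothetical callables standing in for the trained networks, and the tensor shapes are illustrative.

    import numpy as np

    def epoch_inference(image, regnet, liftnet):
        """Two-stage pipeline: RegNet predicts a 2D pose and camera from the
        image, LiftNet lifts that 2D pose to 3D using the same camera."""
        # RegNet: image -> internal 3D pose, camera (K, R, t), joint presence
        y3d_internal, K, R, t, presence = regnet(image)
        # the 2D pose is the full-perspective projection of RegNet's 3D pose
        y_cam = y3d_internal @ R.T + t          # (J, 3) camera coordinates
        x_hom = y_cam @ K.T
        x2d = x_hom[:, :2] / x_hom[:, 2:3]      # (J, 2) pixel coordinates
        # LiftNet: (2D pose, camera) -> refined 3D pose, learned without 3D labels
        y3d = liftnet(x2d, K, R, t)
        return x2d, y3d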

Our experiments show that modeling the lifting as an unsupervised task with a camera in-the-loop results in better generalization to unseen data. We obtain state-of-the-art results for the 3D HPE on the Human3.6M and MPI-INF-3DHP datasets.

RegNet

RegNet.

The W × H input image is fed to (a) a contrastive-pretrained encoder and a separate module Ψ that estimates the intrinsic parameters. The output features are then concatenated and (b) fed into our attention-based capsule decoder. The outputs are three separate capsule vectors, representing an estimate of the 3D pose ŷ, of the camera K[R|t], and a joint presence vector Σ. (c) Each output is further processed before the loss computation. A copy of ŷ is randomly rotated around the vertical axis, obtaining ŷr. Both ŷ and ŷr are projected onto the camera plane, giving x̂ and x̂r, and Σ goes through a sigmoid activation, giving σ̂. (d) ŷ, x̂, x̂r, and σ̂ are fed to the loss functions.
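A minimal sketch of these RegNet stages, assuming PyTorch and hypothetical encoder, psi (Ψ), and capsule_decoder callables; the interfaces and shapes are assumptions for illustration, not the released implementation.

    import math
    import torch

    def project(y, K, R, t):
        """Full-perspective projection of a batch of 3D poses (B, J, 3)."""
        y_cam = torch.einsum('bij,bkj->bki', R, y) + t[:, None, :]
        x = torch.einsum('bij,bkj->bki', K, y_cam)
        return x[..., :2] / x[..., 2:3]

    def regnet_forward(image, encoder, psi, capsule_decoder):
        """Stages (a)-(d) of the figure, with hypothetical submodules."""
        feats = encoder(image)                          # image features
        intr_feats = psi(image)                         # intrinsics features
        z = torch.cat([feats, intr_feats], dim=-1)      # (a) concatenation
        # (b) three capsules: 3D pose, camera (K, R, t), joint-presence logits
        y_hat, (K, R, t), presence_logits = capsule_decoder(z)
        # (c) random rotation of y_hat around the vertical axis
        theta = torch.rand(y_hat.shape[0]) * 2 * math.pi
        c, s = theta.cos(), theta.sin()
        o, l = torch.zeros_like(theta), torch.ones_like(theta)
        R_y = torch.stack([c, o, s, o, l, o, -s, o, c], dim=-1).view(-1, 3, 3)
        y_hat_r = torch.einsum('bij,bkj->bki', R_y, y_hat)
        # project both poses with the predicted camera; sigmoid on presence
        x_hat, x_hat_r = project(y_hat, K, R, t), project(y_hat_r, K, R, t)
        sigma_hat = torch.sigmoid(presence_logits)
        # (d) these quantities feed the loss terms
        return y_hat, x_hat, x_hat_r, sigma_hat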

LiftNet

LiftNet.

The red (2D → 3D), orange (3D → 3D), and yellow (3D → 2D) blocks describe the Lift, Rotate, and Project operations, respectively. The symbol x denotes a 2D pose and y denotes a 3D pose. The hat decorator (^) marks a prediction in the forward pass, while the tilde (~) marks a prediction in the backward pass. The subscript r stands for rotated. The solid arrows describe the flow of the network, while the dashed arrows connect each intermediate datum to its loss.
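A sketch of this cycle, assuming hypothetical lift, rotate, project, and random_rotation callables; hatted names are forward-pass predictions, and names prefixed with t_ stand in for the tilde (backward-pass) predictions in the figure.

    import torch

    def liftnet_cycle(x, K, R, t, lift, rotate, project, random_rotation):
        """Forward and backward passes of the Lift -> Rotate -> Project cycle."""
        # forward pass: lift the input 2D pose, rotate it, reproject it
        y_hat = lift(x, K, R, t)                 # red block, 2D -> 3D
        R_rand = random_rotation(x.shape[0])     # random rotation matrices (B, 3, 3)
        y_hat_r = rotate(y_hat, R_rand)          # orange block, 3D -> 3D
        x_hat_r = project(y_hat_r, K, R, t)      # yellow block, 3D -> 2D
        # backward pass: lift the rotated projection, undo the rotation, reproject
        t_y_r = lift(x_hat_r, K, R, t)
        t_y = rotate(t_y_r, R_rand.transpose(1, 2))   # inverse rotation
        t_x = project(t_y, K, R, t)
        # dashed arrows: each intermediate datum is compared with its counterpart
        # (e.g. x vs t_x, y_hat vs t_y) in the corresponding loss term
        return y_hat, y_hat_r, x_hat_r, t_y_r, t_y, t_x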

Comparison with state-of-the-art

Comparison with state-of-the-art.

EPOCH achieves the best results in terms of MPJPE on multiple tasks.

Ablation studies

Ablation studies.

Ablation studies for EPOCH, for both the RegNet and LiftNet architectures.

Qualitative results

Qualitative results.

EPOCH qualitative results on MPI-INF-3DHP (columns 1, 2, 3, 4) and 3DPW (columns 5, 6). Rows: input images, RegNet output, LiftNet output (front and side view). Our method generalizes to unseen in-the-wild data (3DPW) even though it is trained only on Human3.6M data.

BibTeX

Coming soon