Grégory Rogez    Philippe Weinzaepfel    Cordelia Schmid


We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict the 2D and 3D poses of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our architecture, named LCR-Net, contains three main components: 1) the pose proposal generator, which suggests potential poses at different locations in the image; 2) a classifier, which scores the different pose proposals; and 3) a regressor, which refines pose proposals both in 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non-maximum suppression algorithm. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both the single- and multi-person subsets of the MPII 2D pose benchmark.
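To illustrate the last step, here is a minimal sketch of what integrating over neighboring pose hypotheses could look like: instead of keeping only the top-scoring proposal per person (as hard non-maximum suppression would), nearby proposals are grouped and averaged with their classification scores as weights. The grouping threshold and function names are hypothetical and not taken from the released code.

```python
import numpy as np

def integrate_pose_proposals(poses, scores, dist_thresh=0.3):
    """Score-weighted averaging of neighboring pose hypotheses (illustrative sketch).

    poses:  (N, J, 2) array of N pose proposals with J 2D joints
    scores: (N,) classification scores of the proposals
    dist_thresh: hypothetical grouping threshold on the mean per-joint distance
    """
    order = np.argsort(scores)[::-1]           # process best-scoring proposals first
    used = np.zeros(len(poses), dtype=bool)
    results = []
    for i in order:
        if used[i]:
            continue
        # mean per-joint distance of every proposal to the seed pose
        d = np.linalg.norm(poses - poses[i], axis=2).mean(axis=1)
        group = (~used) & (d < dist_thresh)    # neighboring hypotheses of the seed
        used |= group
        w = scores[group] / scores[group].sum()
        # one final pose per person: the score-weighted average of its group
        results.append((poses[group] * w[:, None, None]).sum(axis=0))
    return results
```

Averaging tends to smooth out the quantization introduced by a discrete set of anchor poses, which is why it can outperform picking a single winner per location.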


Please note that our code is released only for scientific or personal use.

We only provide code for testing our models, not for training.



To test our model trained on Human3.6M:

python h36m_100_p2 Directions1_S11_C1_1.jpg 0

To test our model for in-the-wild pose detection:

python mix_200x2 058017637.jpg 0

The arguments are: the model name, the test image filename, and the GPU id (0 in the examples above).



If you use our code, please cite our CVPR'17 paper:

@inproceedings{rogez2017lcrnet,
  TITLE = {{LCR-Net: Localization-Classification-Regression for Human Pose}},
  AUTHOR = {Rogez, Gregory and Weinzaepfel, Philippe and Schmid, Cordelia},
  BOOKTITLE = {{CVPR}},
  ADDRESS = {Honolulu, United States},
  YEAR = {2017},
  MONTH = July,
}