This is the official PyTorch implementation of the paper "FastHMR: Accelerating Human Mesh Recovery via Token and Layer Merging with Diffusion Decoding" (WACV 2026)
- Dec 21, 2025: 🔥 Released the demo code and model checkpoints.
- Nov 9, 2025: Our work has been accepted to WACV 2026!
- Oct 13, 2025: We propose FastHMR, which accelerates human mesh recovery by up to 2.3x while slightly improving performance over the baseline.
- Install the project dependencies:
```bash
pip install -r requirements.txt
pip install --upgrade pip setuptools wheel  # Necessary to avoid build issues with PyTorch3D
pip install --no-build-isolation "git+https://github.com/facebookresearch/pytorch3d.git@stable"
```
- Download the pretrained weights using huggingface_hub:
```bash
pip install huggingface_hub
hf download SoroushMehraban/FastHMR --local-dir ./checkpoints
```
You also need to register at https://camerahmr.is.tue.mpg.de/ and download the dependencies that both CameraHMR and HMR2.0 require:
```bash
bash ./download_cam_model.sh
```
- Run the demo on a video:
```bash
python demo.py --video path_to_your_video.mp4 --output_pth output_directory --backbone-name camerahmr --visualize
```
In `output_directory`, you should see:
- `mesh_results_<video_name>.pkl`: A pickle file containing the reconstructed meshes (vertices and joints) for each detected person in the video.
- `tracking_results_<video_name>.pkl`: A joblib file containing the tracking results and HMR features for each detected person in the video (before they are passed to the diffusion decoder).
- `<video_name>_unified.mp4`: The output video visualizing all detected people and their reconstructed meshes (if the `--visualize` flag is set).
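The mesh results can then be loaded with the standard `pickle` module. Below is a minimal sketch; the per-person keys `"verts"` and `"joints"` are assumptions for illustration, so inspect your own `.pkl` file to confirm the actual schema. A tiny stand-in file is created first so the snippet runs on its own:

```python
import pickle
from pathlib import Path

# Create a small stand-in file so the snippet is self-contained.
# NOTE: the per-person keys "verts" and "joints" are assumptions about
# the schema; check your own mesh_results_<video_name>.pkl to confirm.
sample = {0: {"verts": [[0.0, 0.0, 0.0]], "joints": [[0.0, 0.0, 0.0]]}}
pkl_path = Path("mesh_results_demo.pkl")
pkl_path.write_bytes(pickle.dumps(sample))

# Load the results and iterate over the detected people.
with pkl_path.open("rb") as f:
    mesh_results = pickle.load(f)

for person_id, data in mesh_results.items():
    print(f"person {person_id}: {len(data['verts'])} vertices, "
          f"{len(data['joints'])} joints")
```

For the tracking results, the README notes the file is saved with joblib, so `joblib.load` would replace `pickle.load` in the same pattern.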