
China’s Baihu Data Hub: Powering Dexterous Robots with MANUS Gloves

September 4, 2025
Robotics


Background

China’s National and Local Co-Built Humanoid Robotics Innovation Center has launched the Baihu Data Hub, an open-source platform providing high-fidelity human-motion datasets to accelerate humanoid robotics. The center’s flagship robot, Qinglong, is China’s first open-source general-purpose humanoid. Unveiled in 2024, Qinglong is used to collect and demonstrate skill-acquisition data through embodied AI and collaborative annotation.

The Challenge: Scarce Machine-Grade Motion Data

Training humanoid robots requires massive datasets of human-like manipulation. But the most valuable data—precisely labeled hand-object interactions—is also the hardest to obtain:

  • Internet videos: plentiful but unannotated.
  • First-person VR data: more relevant but harder to obtain.
  • True robot-level data: most accurate yet rare and expensive.

This data bottleneck slows progress toward robots with human-level dexterity.

The MANUS Solution: Precise Finger Tracking at Scale

To overcome this bottleneck, Qinglong integrates MANUS Quantum Metagloves with full-body motion capture, delivering:

  • Accurate, drift-free finger tracking even when hands overlap or leave camera view.
  • Hand–body synchronization so every reach and grip is perfectly time-locked across the skeleton.
  • Standardized CSV export for immediate use in large-scale AI training pipelines (see the loading sketch below).

These capabilities create the rare, high-quality datasets needed for embodied AI.
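As a rough illustration of how such an export might be consumed, the Python sketch below parses a CSV of per-frame joint angles into a NumPy array ready for a training pipeline. The column layout (a "timestamp" column plus one column per joint channel) is an assumption for illustration, not the actual MANUS export schema.

```python
import csv
import numpy as np

def load_glove_csv(path):
    """Load a glove CSV export into a (frames x channels) NumPy array.

    Assumed schema: one "timestamp" column plus one column per joint
    angle. The real MANUS export layout depends on the recording
    configuration, so treat these names as placeholders.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    timestamps = np.array([float(r["timestamp"]) for r in rows])
    joint_cols = [c for c in rows[0] if c != "timestamp"]
    angles = np.array([[float(r[c]) for c in joint_cols] for r in rows])
    return timestamps, joint_cols, angles
```

Because the finger and full-body channels are time-locked, arrays loaded this way can be resampled to a common rate and batched directly into a training pipeline.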

Delegated Action Sequencing

Robots learn fine motor skills by breaking complex tasks into small, labeled micro-actions. Take plugging in a power cord as an example: the robot performs a sequence of hand-object interactions, grasping the plug, aligning it with the outlet, and inserting it.

These micro-movements form a rich dataset of hand–object interactions for imitation and reinforcement learning.
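To make the idea concrete, here is a minimal Python sketch of one way such annotations could be represented, slicing a motion recording into labeled clips. The MicroAction class, the frame indices, and the plug_in_cord labels are hypothetical, not the Baihu Data Hub’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class MicroAction:
    label: str        # e.g. "grasp", "align", "insert"
    start_frame: int  # first frame of the segment (inclusive)
    end_frame: int    # last frame of the segment (exclusive)

def slice_segments(motion, segments):
    """Split a (frames x channels) motion array into labeled clips.

    Returns one (label, clip) pair per annotated micro-action; these
    clips are the supervised examples an imitation-learning model
    trains on.
    """
    return [(s.label, motion[s.start_frame:s.end_frame]) for s in segments]

# Illustrative annotation of a "plug in the power cord" recording.
plug_in_cord = [
    MicroAction("grasp", 0, 120),
    MicroAction("align", 120, 260),
    MicroAction("insert", 260, 330),
]
```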

From Motion to Dexterity

Captured and labeled micro-movements flow through a structured AI process:

  1. Imitation learning, where robots copy human motion (a minimal sketch follows this list).
  2. Control-theory optimization, during which timing and force are fine-tuned.
  3. Reinforcement learning, enabling robots to improve through trial and reward.

This staged approach transforms motion data into human-level dexterity.
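As a rough illustration of the first stage, the sketch below runs a minimal behavior-cloning loop in PyTorch: a small network regresses expert actions from observations. The dimensions, data, and hyperparameters are synthetic placeholders, not the center’s actual training setup.

```python
import torch
import torch.nn as nn

# Minimal behavior-cloning sketch: regress expert actions from observed
# states. In practice the batches would come from the labeled
# micro-action clips described above; here they are random stand-ins.
obs_dim, act_dim = 40, 20          # hypothetical body+finger state -> finger targets
policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(),
    nn.Linear(128, act_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

observations = torch.randn(256, obs_dim)    # placeholder demonstrations
expert_actions = torch.randn(256, act_dim)

for epoch in range(100):
    predicted = policy(observations)
    loss = loss_fn(predicted, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a full pipeline, the cloned policy would then be refined by the later stages: control-theoretic tuning of timing and force, followed by reward-driven reinforcement learning.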

Ready to access machine-grade motion data?
Join leading labs worldwide using MANUS for embodied AI.