Training robots to perform dexterous real-world tasks still depends on one thing that is difficult to scale: high-quality human manipulation data.
RLWRLD, a South Korean Physical AI startup recently featured by the Associated Press, is building datasets around that constraint. Rather than relying on internet-scale text or video, the company goes directly to skilled workers. At Lotte Hotel Seoul, a worker folded banquet napkins while wearing body-mounted cameras. The same approach is now running in CJ Logistics warehouses and Lawson convenience stores — real tasks, captured as they happen.
The goal is to build AI systems that can transfer human dexterity into robotic platforms across industrial and service applications.
RLWRLD integrates MANUS gloves into its dexterous manipulation data collection workflow.
MANUS gloves capture full hand and finger articulation and stream that data directly into teleoperation, simulation, or dataset collection environments. For workflows that involve repeated object interactions or demonstrations across multiple operators, consistent tracking fidelity is what makes the resulting datasets usable downstream.
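As an illustrative sketch of the kind of dataset-collection loop described above: the snippet below records a stream of finger-joint angles into a JSON-lines file. The `read_frame` callback and the 20-joint layout are assumptions for illustration, not the MANUS SDK; a real pipeline would replace the callback with the vendor's streaming interface.

```python
import json
import time
from dataclasses import dataclass, asdict, field


@dataclass
class HandFrame:
    """One sampled hand-articulation frame (hypothetical schema)."""
    timestamp: float                       # seconds since recording start
    joint_angles: list = field(default_factory=list)  # e.g. 20 joint angles, degrees


def record_demonstration(read_frame, duration_s, path):
    """Record streamed articulation frames for `duration_s` seconds to a
    JSON-lines file at `path`. `read_frame` is a hypothetical callable that
    returns the current joint angles; returns the number of frames written."""
    frames = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        angles = read_frame()  # stand-in for a real glove/SDK read
        frames.append(HandFrame(time.monotonic() - start, list(angles)))
    with open(path, "w") as f:
        for frame in frames:
            f.write(json.dumps(asdict(frame)) + "\n")
    return len(frames)
```

One frame per line keeps recordings append-only and easy to merge across operators, which matters when many workers contribute demonstrations of the same task.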
RLWRLD's aim is not simply to record human motion visually. It is to preserve the physical behavior embedded in skilled work — the grip adjustments, the contact timing, the coordination between fingers that define how an experienced worker actually handles an object. MANUS gloves are the layer in the pipeline that makes that capture reliable.
MANUS gloves are already in use across robotics and embodied AI research pipelines for imitation learning, teleoperation, and synthetic data generation.
The demand for dexterous manipulation datasets continues to grow as robotics labs and companies push toward more capable robotic hands and humanoid systems.
Large language models were trained on text collected from the internet. Physical AI systems require something different: real interaction data grounded in the physical world. That includes how workers fold textiles, organize shelves, grasp irregular objects, and coordinate fine motor actions across repetitive tasks. Much of this knowledge is not explicitly describable, yet it is naturally expressed through human movement.
By combining human demonstrations with infrastructure that captures hand motion at the level of articulation, companies like RLWRLD are building datasets intended for the next generation of robot learning research. MANUS gloves support that process by providing the finger tracking fidelity that transforms demonstrations into training-ready data.
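To make "training-ready data" concrete, here is a minimal sketch of one common way demonstrations are consumed: behavioral cloning, where each recorded articulation state is paired with the next state as the target action. This is a generic illustration of imitation-learning data preparation, not RLWRLD's actual pipeline.

```python
def make_bc_pairs(trajectory):
    """Convert a joint-angle trajectory (list of per-frame angle lists) into
    (state, action) pairs for behavioral cloning: the action at step t is
    the hand articulation observed at step t + 1."""
    return [(trajectory[t], trajectory[t + 1]) for t in range(len(trajectory) - 1)]
```

A supervised policy can then be fit to predict each action from its state, which is why consistent finger-tracking fidelity across frames and operators directly determines dataset quality.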