DeVI: Physics-based Dexterous Human-Object
Interaction via Synthetic Video Imitation

Seoul National University, RLWRLD
Preprint

Given a physics environment containing a 3D human and objects, along with an interaction text prompt, DeVI generates physically plausible human-object interaction motion by using a video diffusion model as an interaction-aware motion planner.


Abstract

Recent advances in video generative models enable the synthesis of realistic human-object interaction videos across a wide range of scenarios and object categories, including complex dexterous manipulations that are difficult to capture with motion capture systems. While the rich interaction knowledge embedded in these synthetic videos holds strong potential for motion planning in dexterous robotic manipulation, their limited physical fidelity and purely 2D nature make them difficult to use directly as imitation targets in physics-based character control. We present DeVI (Dexterous Video Imitation), a novel framework that leverages text-conditioned synthetic videos to enable physically plausible dexterous agent control for interacting with unseen target objects. To overcome the imprecision of generative 2D cues, we introduce a hybrid tracking reward that integrates 3D human tracking with robust 2D object tracking. Unlike methods relying on high-quality 3D kinematic demonstrations, DeVI requires only the generated video, enabling zero-shot generalization across diverse objects and interaction types. Extensive experiments demonstrate that DeVI outperforms existing approaches that imitate 3D human-object interaction demonstrations, particularly in modeling dexterous hand-object interactions. We further validate the effectiveness of DeVI in multi-object scenes and text-driven action diversity, confirming the advantage of using video as an HOI-aware motion planner.
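To make the hybrid tracking reward concrete, the sketch below combines a 3D human pose-tracking term with a 2D object-reprojection term. This is a minimal illustration of the idea, not DeVI's exact formulation: the exponential kernels, weights, normalization, and function signature are all our own assumptions.

```python
import numpy as np

def hybrid_tracking_reward(sim_joints, ref_joints,
                           sim_obj_center, ref_obj_center_2d,
                           cam_proj, img_wh=(256.0, 256.0),
                           w_human=0.7, w_obj=0.3,
                           k_human=10.0, k_obj=50.0):
    """Illustrative hybrid reward: 3D human tracking + 2D object tracking.

    sim_joints / ref_joints: (J, 3) simulated vs. video-derived joints.
    sim_obj_center: (3,) simulated object center in world coordinates.
    ref_obj_center_2d: (2,) object center tracked in the generated video.
    cam_proj: (3, 4) projection matrix of the camera that rendered the scene.
    """
    # 3D human term: exponentiated mean squared joint-position error.
    human_err = np.mean(np.sum((sim_joints - ref_joints) ** 2, axis=-1))
    r_human = np.exp(-k_human * human_err)

    # 2D object term: project the simulated object into the video camera
    # and penalize the pixel-space distance to the tracked 2D reference,
    # normalized by the image size so the gain k_obj is dimensionless.
    obj_h = cam_proj @ np.append(sim_obj_center, 1.0)  # homogeneous point
    obj_2d = obj_h[:2] / obj_h[2]                      # perspective divide
    obj_err = np.sum(((obj_2d - ref_obj_center_2d) / np.asarray(img_wh)) ** 2)
    r_obj = np.exp(-k_obj * obj_err)

    return w_human * r_human + w_obj * r_obj
```

A standard RL algorithm can then maximize such a reward in simulation; keeping the object reference in 2D sidesteps the brittle monocular 3D object-pose estimation that generated videos would otherwise require.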


Method Overview


Our method consists of three parts: (1) 2D HOI video generation, (2) extracting hybrid imitation targets from the video, and (3) learning a humanoid control policy. First, we generate a 2D HOI video from the rendered 3D scene using a pre-trained image-to-video diffusion model. Then, the hybrid imitation targets, which include a 3D human reference and a 2D object reference, are extracted from the video. Using these hybrid imitation targets, we learn a humanoid control policy that imitates the video via our hybrid tracking reward.
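As a small, hedged example of step (2), the snippet below extracts a 2D object reference by tracking the object through the generated video with a generic off-the-shelf tracker. OpenCV's CSRT tracker, the video path, and the first-frame bounding box (available here because the first frame is rendered from the known 3D scene) are our assumptions; DeVI's actual tracker may differ.

```python
import cv2
import numpy as np

def extract_2d_object_reference(video_path, init_bbox):
    """Track the manipulated object through a generated HOI video.

    init_bbox: (x, y, w, h) of the object in the first frame; it is
    known because the first frame is rendered from the 3D scene.
    Returns a (T, 2) array of per-frame object centers in pixels.
    """
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    assert ok, "could not read the generated video"

    # CSRT is a robust classical tracker (requires opencv-contrib-python);
    # the tracker choice is our assumption, not necessarily DeVI's.
    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, init_bbox)

    centers = [(init_bbox[0] + init_bbox[2] / 2.0,
                init_bbox[1] + init_bbox[3] / 2.0)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, (x, y, w, h) = tracker.update(frame)
        # Hold the last center if the tracker loses the object this frame.
        centers.append((x + w / 2.0, y + h / 2.0) if found else centers[-1])
    cap.release()
    return np.asarray(centers)
```

The resulting per-frame centers would play the role of the 2D object reference (ref_obj_center_2d) in the reward sketched above.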

Results


Various Simulated HOIs

Trophy

Camera

Coke

Wok

Straw Hat

Garbage

Pot

Potted Plant



Target Awareness & Text Controllability

"Lifts a pot lid with right hand and places it onto the pot to close it"

"Lifts a frying pan with right hand and places it onto the induction"

"Picks up an apple with left hand and places into a brown basket"

"Picks up a tomato with right hand and places into a brown basket"



Synthetic Imitation Target



DeVI on the GRAB Dataset

BibTeX

coming soon!