GALA: Generating Animatable Layered Assets
from a Single Scan

1Seoul National University, 2Codec Avatars Lab, Meta
*Indicates Equal Contribution

CVPR 2024

Given a single-layer 3D mesh of a clothed human (left), our approach enables the Generation of Animatable Layered Assets for 3D garment transfer and avatar customization in any pose by decomposing and inpainting the geometry and texture of each layer in a canonical space with a pretrained 2D diffusion model.

Abstract

We present GALA, a framework that takes as input a single-layer clothed 3D human mesh and decomposes it into complete multi-layered 3D assets. The outputs can then be combined with other assets to create novel clothed human avatars in any pose. Existing reconstruction approaches often treat clothed humans as a single layer of geometry and overlook the inherent compositionality of humans with hairstyles, clothing, and accessories, thereby limiting the utility of the meshes for downstream applications. Decomposing a single-layer mesh into separate layers is a challenging task because it requires the synthesis of plausible geometry and texture for the severely occluded regions. Moreover, even after successful decomposition, the meshes are not normalized in terms of pose and body shape, which prevents coherent composition with novel identities and poses. To address these challenges, we propose to leverage the general knowledge of a pretrained 2D diffusion model as a geometry and appearance prior for humans and other assets. We first separate the input mesh using 3D surface segmentation extracted from multi-view 2D segmentations. We then synthesize the missing geometry of each layer in both the posed and canonical spaces using a novel pose-guided Score Distillation Sampling (SDS) loss. After inpainting high-fidelity 3D geometry, we apply the same SDS loss to its texture to obtain the complete appearance, including the initially occluded regions. Through a series of decomposition steps, we obtain multiple layers of 3D assets in a shared canonical space normalized in terms of pose and body shape, supporting effortless composition with novel identities and reanimation in novel poses. Our experiments demonstrate the effectiveness of our approach for decomposition, canonicalization, and composition tasks compared to existing solutions.
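To make the Score Distillation Sampling step above concrete, the following PyTorch-style sketch computes an SDS-style loss for one rendered view of the current 3D asset. It is a minimal illustration, not the paper's implementation: `denoise_eps` is a hypothetical placeholder for a frozen pretrained 2D diffusion denoiser, any pose or text conditioning is folded into `cond_embed`, and the timestep range and weighting follow common SDS practice.

```python
import torch

def sds_loss(rendered_rgb, cond_embed, denoise_eps, alphas_cumprod,
             t_range=(0.02, 0.98)):
    """Score Distillation Sampling loss for one rendered view (illustrative sketch).

    rendered_rgb   : (B, 3, H, W) image rendered from the current 3D asset.
    cond_embed     : conditioning (e.g. a pose-aware text embedding) -- placeholder.
    denoise_eps    : hypothetical callable (x_t, t, cond) -> predicted noise,
                     standing in for a frozen pretrained 2D diffusion model.
    alphas_cumprod : (T,) cumulative noise schedule of that model.
    """
    B = rendered_rgb.shape[0]
    T = alphas_cumprod.shape[0]

    # Sample a diffusion timestep and corrupt the rendering with Gaussian noise.
    t = torch.randint(int(t_range[0] * T), int(t_range[1] * T), (B,),
                      device=rendered_rgb.device)
    alpha_bar = alphas_cumprod.to(rendered_rgb.device)[t].view(B, 1, 1, 1)
    noise = torch.randn_like(rendered_rgb)
    x_t = alpha_bar.sqrt() * rendered_rgb + (1 - alpha_bar).sqrt() * noise

    # The frozen diffusion model predicts the noise; the residual against the
    # true noise is the distillation signal (no gradient through the model).
    with torch.no_grad():
        eps_pred = denoise_eps(x_t, t, cond_embed)
    w = 1 - alpha_bar                      # a common SDS weighting choice
    grad = w * (eps_pred - noise)

    # Inject `grad` as the gradient w.r.t. the rendered image, so backprop
    # reaches the geometry/texture parameters through the renderer only.
    return (grad.detach() * rendered_rgb).sum() / B
```

In practice such a loss is evaluated on renderings from many viewpoints per iteration, and its gradient flows back through a differentiable renderer into the underlying geometry and texture representations.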


Example Outputs

Decomposition and Canonicalization

Layered Decomposition and Composition

Decomposition of User Generated Assets

Generated Assets

Method


GALA learns an object layer and the remaining human layer in separate canonical spaces. The canonical space (orange) and the original posed space (purple) are differentiably associated via linear blend skinning (LBS). Our pose-guided SDS loss (right) drives decomposition and inpainting in both the canonical and posed spaces, while reconstruction and segmentation losses (bottom left) preserve the accuracy of the visible regions.
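For context, linear blend skinning maps a canonical-space vertex into the posed space as a weighted sum of per-joint transformations, x_posed = (Σ_k w_k G_k) x_canon, which is what differentiably ties the two spaces together. The sketch below shows this forward skinning in PyTorch with assumed tensor shapes; it is a generic LBS illustration, not the paper's code.

```python
import torch

def lbs_forward(verts_canon, skin_weights, bone_transforms):
    """Map canonical vertices to the posed space with linear blend skinning.

    verts_canon     : (V, 3)    vertices in the canonical space.
    skin_weights    : (V, K)    per-vertex blend weights over K joints (rows sum to 1).
    bone_transforms : (K, 4, 4) homogeneous joint transforms for the target pose.
    """
    # Blend the per-joint transforms for every vertex: (V, 4, 4).
    T = torch.einsum('vk,kij->vij', skin_weights, bone_transforms)

    # Apply each blended transform to the vertex in homogeneous coordinates.
    ones = torch.ones_like(verts_canon[:, :1])
    verts_h = torch.cat([verts_canon, ones], dim=-1)              # (V, 4)
    verts_posed = torch.einsum('vij,vj->vi', T, verts_h)[:, :3]   # (V, 3)
    return verts_posed
```

The inverse direction (posed to canonical) uses the inverted blended transforms, and because every step is differentiable, losses defined in either space can propagate gradients to the canonical geometry.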

Video

BibTeX

@misc{kim2024gala,
      title={GALA: Generating Animatable Layered Assets from a Single Scan},
      author={Taeksoo Kim and Byungjun Kim and Shunsuke Saito and Hanbyul Joo},
      year={2024},
      eprint={2401.12979},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}