- Yehonathan Litman¹
- Or Patashnik²
- Kangle Deng¹
- Aviral Agrawal¹
- Rushikesh Zawar¹
- Fernando De la Torre¹
- Shubham Tulsiani¹
- ¹Carnegie Mellon University
- ²Tel Aviv University
Abstract
Recent works in inverse rendering have shown promise in using multi-view images of an object to recover shape, albedo, and materials.
However, the recovered components often fail to render accurately under new lighting conditions due to the intrinsic challenge of
disentangling albedo and material properties from input images. To address this challenge, we introduce MaterialFusion, an enhanced
conventional 3D inverse rendering pipeline that incorporates a 2D prior on texture and material properties. We present StableMaterial,
a 2D diffusion model prior that refines multi-lit data to estimate the most likely albedo and material from given input appearances.
This model is trained on albedo, material, and relit image data derived from a curated dataset of approximately 12K artist-designed
synthetic Blender objects called BlenderVault. We incorporate this diffusion prior into an inverse rendering framework where we use
score distillation sampling (SDS) to guide the optimization of the albedo and materials, improving relighting performance in comparison
with previous work. We validate MaterialFusion's relighting performance on 4 datasets of synthetic and real objects under diverse
illumination conditions, showing our diffusion-aided approach significantly improves the appearance of reconstructed objects under
novel lighting conditions.
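To make the SDS guidance concrete, the sketch below shows how one such update could look if the albedo and material (e.g., roughness/metallic) maps were optimized directly as texture tensors. The `StableMaterialPrior`-style wrapper and its `add_noise`/`predict_noise` methods are hypothetical stand-ins for the diffusion prior rather than the released API, and the full pipeline propagates this gradient through a differentiable renderer into the underlying shape and material representation rather than into raw maps.

```python
import torch
import torch.nn.functional as F

def sds_update(prior, albedo, orm, input_views, optimizer, guidance_scale=100.0):
    # Stack the current albedo and ORM (occlusion-roughness-metallic) estimates
    # into the channel layout the 2D prior is assumed to operate on.
    x = torch.cat([albedo, orm], dim=1)                # (B, 6, H, W)

    # Forward diffusion: sample a timestep and noise the material stack.
    t = torch.randint(20, 980, (x.shape[0],), device=x.device)
    noise = torch.randn_like(x)
    x_noisy = prior.add_noise(x, noise, t)             # hypothetical helper

    # Query the frozen prior, conditioned on the input appearances; no
    # gradients flow through the diffusion network itself.
    with torch.no_grad():
        eps_pred = prior.predict_noise(x_noisy, t, cond=input_views)

    # Score distillation sampling: the residual (eps_pred - noise) acts as a
    # gradient that pulls the maps toward the prior's high-likelihood region.
    grad = guidance_scale * (eps_pred - noise)
    target = (x - grad).detach()
    loss = 0.5 * F.mse_loss(x, target, reduction="sum")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A timestep-dependent weight w(t) is commonly folded into the residual as well; it is omitted here for brevity.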
Intrinsic Decomposition & Novel View Synthesis
All objects were captured as a set of images with corresponding camera poses under a single, fixed, and unknown environment illumination.
Relighting under Novel Illuminations
BlenderVault Dataset
To train our material diffusion prior, we utilize BlenderVault, a curated dataset containing 11,709 synthetic Blender objects designed by artists.
The objects are diverse in nature and contain high-quality property assets that are extracted and used to generate training data. The dataset is available for download.
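The exact extraction and rendering scripts are not reproduced here; as a rough illustration of the kind of asset mining involved, reading scalar Principled BSDF parameters from a .blend file with Blender's `bpy` API might look like the sketch below (the helper name and path are illustrative, and textured inputs would additionally require baking the connected image textures, which this sketch does not cover).

```python
import bpy

def extract_principled_params(blend_path):
    # Open the .blend file in the running Blender/bpy session.
    bpy.ops.wm.open_mainfile(filepath=blend_path)
    records = []
    for mat in bpy.data.materials:
        if not mat.use_nodes:
            continue
        for node in mat.node_tree.nodes:
            if node.type == "BSDF_PRINCIPLED":
                records.append({
                    "material": mat.name,
                    # Base Color is an RGBA default value when no texture
                    # is connected to the socket.
                    "base_color": tuple(node.inputs["Base Color"].default_value),
                    "roughness": node.inputs["Roughness"].default_value,
                    "metallic": node.inputs["Metallic"].default_value,
                })
    return records

# Example usage (hypothetical path):
# params = extract_principled_params("/path/to/object.blend")
```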
Citation
@article{litman2024materialfusion,
  author  = {Yehonathan Litman and Or Patashnik and Kangle Deng and Aviral Agrawal and Rushikesh Zawar and Fernando De la Torre and Shubham Tulsiani},
  title   = {MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors},
  journal = {arXiv},
  year    = {2024}
}
Acknowledgements
This work was supported in part by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1745016.
Code for this website was borrowed from TensoIR.