My research focuses on how large generative models can serve as priors for faithfully reconstructing the world by capturing its underlying intrinsic properties.
My work is supported in part by an NSF Graduate Research Fellowship.
I am always open to talking about research ideas or potential collaborations. Feel free to email me.
An efficient control framework for video inpainting that focuses computation only where it is needed, achieving a 10x reduction in compute relative to state-of-the-art generative editing methods while improving editing quality.
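As a rough illustration of the localized-compute idea (a sketch, not this project's actual pipeline), the snippet below crops each frame to the inpainting mask's bounding box plus a context margin, runs a generative editor only on that crop, and composites the result back. The names `masked_crop_edit` and `edit_model`, and the margin value, are hypothetical stand-ins.

```python
import numpy as np

def masked_crop_edit(frame, mask, edit_model, margin=32):
    """frame: (H, W, 3) array; mask: (H, W) bool region to inpaint.
    `edit_model` is a placeholder for any generative editor."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return frame  # nothing to edit: skip the expensive model entirely
    h, w = mask.shape
    # Bounding box of the mask, padded by a context margin and clamped.
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, h)
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, w)
    crop = edit_model(frame[y0:y1, x0:x1], mask[y0:y1, x0:x1])
    out = frame.copy()
    # Blend only the masked pixels so untouched regions stay bit-exact.
    m = mask[y0:y1, x0:x1, None]
    out[y0:y1, x0:x1] = np.where(m, crop, frame[y0:y1, x0:x1])
    return out
```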
A compact, coarse-to-fine neural material model using texture-space shading and spatiotemporal computation amortization, achieving over 90 FPS on Meta Quest 3 with visual quality comparable to NeuMIP.
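One hedged sketch of what spatiotemporal amortization can look like in texture space: each frame re-evaluates the expensive neural decoder for only a quarter of the texels (a rotating 2x2 interleave) and reuses cached results elsewhere. The refresh schedule and the `neural_decode` stand-in are assumptions for illustration; the actual model's scheme may differ.

```python
import numpy as np

def amortized_shade(neural_decode, features, cache, frame_idx):
    """features: (H, W, C) texture-space inputs; cache: (H, W, 3) RGB
    carried over from prior frames, updated in place."""
    H, W, _ = features.shape
    iy, ix = divmod(frame_idx % 4, 2)  # which 2x2 phase to refresh this frame
    ys = np.arange(iy, H, 2)
    xs = np.arange(ix, W, 2)
    sub = features[np.ix_(ys, xs)]     # only this subset is re-shaded
    cache[np.ix_(ys, xs)] = neural_decode(sub)
    return cache                       # 3/4 of texels are reused, not recomputed
```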
Reconstructs input images into a 3D occupancy grid, projects it to a bird's-eye view of the environment, and registers that view against satellite imagery to correct SLAM drift in real time without requiring GPS.
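A minimal sketch of the two geometric steps named above, under simplifying assumptions: the BEV map and the satellite tile share scale and orientation, so drift reduces to a 2D translation recoverable by phase correlation. All names here are illustrative, not the system's code.

```python
import numpy as np

def occupancy_to_bev(grid: np.ndarray) -> np.ndarray:
    # Collapse the height axis (assumed last) with a max: a column is
    # "occupied" in the BEV map if anything along its height is occupied.
    return grid.max(axis=2).astype(np.float32)

def estimate_drift(bev: np.ndarray, satellite: np.ndarray):
    # Phase correlation: the peak of the inverse FFT of the normalized
    # cross-power spectrum gives the (dy, dx) shift that best aligns
    # `bev` onto `satellite`. Assumes both images have the same size.
    Fa = np.fft.fft2(bev)
    Fb = np.fft.fft2(satellite)
    R = np.conj(Fa) * Fb
    R /= np.abs(R) + 1e-8              # keep phase only, avoid divide-by-zero
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h                        # wrap large shifts to negative offsets
    if dx > w // 2:
        dx -= w
    return dy, dx
```

In practice the recovered offset would feed back into the SLAM pose estimate; handling rotation and scale between the BEV and the satellite tile would require a richer registration step than this translation-only sketch.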