Education Background
Master of Philosophy (M.Phil.)
University of Science and Technology of China (USTC), 2023 - Present
4D scene representation | reconstruction | editing
Bachelor of Engineering (Honors)
Huazhong University of Science and Technology (HUST), 2019 - 2023
Robotics competitions (e.g., DJI RoboMaster)
High School Diploma
The High School Attached to Hunan Normal University, Changsha, 2016 - 2019
China High School Biology Olympiad | a wonderful time...
Research
My research interests lie in neural rendering technologies, specifically focusing on 4D scene representation, reconstruction, and editing.
* Equal contribution.
MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting
Ruijie Zhu*,
Yanzhe Liang*,
Hanzhi Chang,
Jiacheng Deng,
Jiahao Lu,
Wenfei Yang,
Tianzhu Zhang,
Yongdong Zhang,
NeurIPS, 2024
[Paper]
[Page]
Dynamic scene reconstruction is a long-standing challenge in the field of 3D vision. Recently, the emergence of 3D Gaussian Splatting has provided new insights into this problem. Although subsequent efforts rapidly extend static 3D Gaussian Splatting to dynamic scenes, they often lack explicit constraints on object motion, leading to optimization difficulties and performance degradation. To address these issues, we propose a novel deformable 3D Gaussian splatting framework called MotionGS, which explores explicit motion priors to guide the deformation of 3D Gaussians. Specifically, we first introduce an optical flow decoupling module that decouples optical flow into camera flow and motion flow, corresponding to camera movement and object motion respectively. The motion flow then effectively constrains the deformation of 3D Gaussians, thus simulating the motion of dynamic objects. Additionally, a camera pose refinement module is proposed to alternately optimize 3D Gaussians and camera poses, mitigating the impact of inaccurate camera poses. Extensive experiments on monocular dynamic scenes validate that MotionGS surpasses state-of-the-art methods and exhibits significant superiority in both qualitative and quantitative results.
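The core idea, splitting observed optical flow into a camera-induced part and a residual object-motion part, can be illustrated with a short sketch. Below is a minimal, hypothetical Python/PyTorch illustration, not the paper's actual implementation: the function name and the assumption that per-pixel depth and the relative camera pose are available are mine.

```python
import torch

def decouple_optical_flow(optical_flow, depth, K, T_rel):
    """Hypothetical sketch: split observed optical flow into camera flow
    (ego-motion) and motion flow (object motion).
    optical_flow: (H, W, 2), depth: (H, W), K: (3, 3) intrinsics,
    T_rel: (4, 4) relative camera pose from frame t to frame t+1."""
    H, W = depth.shape
    # Build a homogeneous pixel grid (u, v, 1)
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()  # (H, W, 3)
    # Back-project each pixel to a 3D point using its depth
    rays = torch.linalg.inv(K) @ pix.reshape(-1, 3).T                 # (3, H*W)
    pts = rays * depth.reshape(1, -1)                                 # (3, H*W)
    # Move the points into the next camera's frame and re-project
    pts_h = torch.cat([pts, torch.ones(1, pts.shape[1])], dim=0)      # (4, H*W)
    pts_next = (T_rel @ pts_h)[:3]
    proj = K @ pts_next
    uv_next = (proj[:2] / proj[2:].clamp(min=1e-6)).T.reshape(H, W, 2)
    # Camera flow: displacement explained by camera movement alone
    camera_flow = uv_next - pix[..., :2]
    # Motion flow: the residual the camera cannot explain, i.e. object motion
    motion_flow = optical_flow - camera_flow
    return camera_flow, motion_flow
```

In the paper, the motion flow obtained in this spirit supervises the deformation of the 3D Gaussians, while a separate module alternately refines the camera poses.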
DN-4DGS: Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering
Jiahao Lu,
Jiacheng Deng,
Ruijie Zhu,
Yanzhe Liang,
Wenfei Yang,
Tianzhu Zhang,
Xu Zhou,
NeurIPS, 2024
[Paper]
[Page]
Dynamic scene rendering is an intriguing yet challenging problem. Although current NeRF-based methods have achieved satisfactory performance, they still cannot reach real-time speeds. Recently, 3D Gaussian Splatting (3DGS) has garnered researchers' attention due to its outstanding rendering quality and real-time speed. Therefore, a new paradigm has been proposed: defining a set of canonical 3D Gaussians and deforming them to individual frames in deformable fields. However, the coordinates of the canonical 3D Gaussians are filled with noise, which can propagate into the deformable fields, and no existing method adequately considers the aggregation of 4D information. Therefore, we propose the Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering (DN-4DGS). Specifically, a Noise Suppression Strategy is introduced to change the distribution of the coordinates of the canonical 3D Gaussians and suppress noise. Additionally, a Decoupled Temporal-Spatial Aggregation Module is designed to aggregate information from adjacent points and frames. Extensive experiments on various real-world datasets demonstrate that our method achieves state-of-the-art rendering quality at a real-time level.
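As a rough illustration of what decoupled temporal-spatial aggregation can look like, here is a minimal, hypothetical Python/PyTorch sketch. The kNN averaging and the uniform blending of adjacent frames are my simplifying assumptions; the paper's actual module is learned and more elaborate.

```python
import torch

def spatial_aggregate(feats, coords, k=8):
    """Hypothetical spatial branch: mix each canonical Gaussian's feature
    with its k nearest neighbors in 3D. feats: (N, C), coords: (N, 3)."""
    dists = torch.cdist(coords, coords)             # (N, N) pairwise distances
    knn = dists.topk(k + 1, largest=False).indices  # k neighbors + the point itself
    return feats[knn].mean(dim=1)                   # (N, C) neighborhood average

def temporal_aggregate(feat_prev, feat_curr, feat_next):
    """Hypothetical temporal branch: blend per-Gaussian features from
    adjacent frames; a learned module would weight them adaptively."""
    return (feat_prev + feat_curr + feat_next) / 3.0
```

The two branches being separate functions mirrors the "decoupled" design: spatial and temporal context are aggregated independently before being fed to the deformable field.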
Internships
AI Algorithm Research Intern
Lenovo, Shanghai
12/2023 - 11/2024
Remote Research Project Program, IEG
Tencent, Shenzhen
1/2024 - 6/2024
Computer Vision Intern
Insta360, Shenzhen
6/2023 - 9/2023
Projects & Competitions
- [DJI RoboMaster] The Second Prize of the National University Robotics Competition
- The First Prize of the National University Students' Opt-Sci-Tech Competition
Template is adapted from here.
Last updated: Nov. 2024