I am a second-year Ph.D. student in the CSE department at UC San Diego, advised by Prof. Hao Su. Before that, I graduated from the CS Department of Tsinghua University in July 2019, where I was advised by Prof. Shi-Min Hu. My research interests include 3D vision and robotics. [CV]
DeepMetaHandles: Learning Deformation Meta-Handles of 3D Meshes with Biharmonic Coordinates
Minghua Liu, Minhyuk Sung, Radomir Mech, Hao Su
We present DeepMetaHandles, a 3D conditional generative model based on mesh deformation. Our method takes automatically generated control points with biharmonic coordinates as deformation handles and learns a latent space of deformations for each input mesh. Each axis of the space is explicitly associated with multiple deformation handles and is thus called a meta-handle. The disentangled meta-handles factorize all plausible deformations of the shape, and each corresponds to an intuitive deformation. We learn the meta-handles in an unsupervised manner by incorporating a target-driven deformation module. We also employ a differentiable renderer and a 2D discriminator to enhance the plausibility of the deformations.
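The handle-based deformation above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a precomputed biharmonic coordinate matrix `W` (vertex-to-handle influence weights), and treats a meta-handle as a shared direction in handle-offset space scaled by one coefficient.

```python
import numpy as np

def deform(vertices, W, handle_offsets):
    """Handle-based deformation with precomputed biharmonic coordinates.

    vertices: (n, 3) mesh vertices; W: (n, h) influence of each of the h
    handles on each vertex (rows sum to 1); handle_offsets: (h, 3).
    Each vertex moves by a weighted blend of the handle displacements.
    """
    return vertices + W @ handle_offsets

def apply_meta_handle(vertices, W, meta_direction, coeff):
    """A meta-handle couples several control points into one deformation axis:
    a shared (h, 3) direction in handle-offset space, scaled by one coefficient.
    """
    return deform(vertices, W, coeff * meta_direction)
```

With an identity `W`, each vertex simply follows its own handle, which makes the blending behavior easy to verify.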
Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance
Minghua Liu, Xiaoshuai Zhang, Hao Su
We propose a mesh reconstruction method that leverages the input point cloud as much as possible by only adding connectivity information to the existing points. Specifically, we predict which triplets of points should form faces. Our key innovation is a surrogate of local connectivity, computed by comparing the intrinsic and extrinsic metrics. We learn to predict this surrogate with a deep point cloud network and then feed it to an efficient post-processing module for high-quality mesh generation. Through experiments on synthetic and real data, we demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes well to unseen categories.
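The intrinsic/extrinsic intuition can be shown with a toy check. This is an illustrative sketch, not the paper's method: for two points on a surface, the geodesic (intrinsic) distance is close to the Euclidean (extrinsic) distance only when the surface between them is locally connected; a large ratio suggests the pair spans a gap, so a face over it should be rejected. The threshold value here is a made-up example.

```python
import numpy as np

def is_candidate_face(geodesic, euclidean, threshold=1.1):
    """Toy intrinsic/extrinsic test for a point triplet.

    geodesic, euclidean: (3,) pairwise distances for the triplet's edges.
    A ratio near 1 means the surface between the points is locally flat
    and connected; a ratio well above 1 means the edge likely crosses a
    gap (e.g. two sides of a thin slit) and the face should be rejected.
    """
    ratios = geodesic / np.maximum(euclidean, 1e-12)
    return bool(np.all(ratios < threshold))
```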
SAPIEN: A SimulAted Part-based Interactive ENvironment
Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel Chang, Leonidas Guibas, Hao Su
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects. It enables various robotic vision and interaction tasks that require detailed part-level understanding.
Morphing and Sampling Network for Dense Point Cloud Completion
Minghua Liu, Lu Sheng, Sheng Yang, Jing Shao, Shi-Min Hu
To acquire high-fidelity dense point clouds and avoid the uneven distribution, blurred details, and structural loss typical of existing methods' results, we propose a novel approach that completes the partial point cloud in two stages. In the first stage, the approach predicts a complete but coarse-grained point cloud with a collection of parametric surface elements. In the second stage, it merges the coarse-grained prediction with the input point cloud via a novel sampling algorithm and then learns a point-wise residual for the combination. Our method uses a joint loss function to guide the distribution of the points.
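The merge-then-resample step of the second stage can be sketched generically. This is only an illustration of the idea: farthest point sampling stands in for the paper's own sampling algorithm, which additionally balances point density between the prediction and the input.

```python
import numpy as np

def farthest_point_sample(points, k, seed=0):
    """Greedy farthest point sampling: repeatedly pick the point farthest
    from the set chosen so far, yielding a roughly uniform subset of size k."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

def merge_and_resample(coarse, partial, k):
    """Concatenate the coarse prediction with the input partial cloud,
    then subsample a fixed-size point set from the union."""
    merged = np.concatenate([coarse, partial], axis=0)
    return farthest_point_sample(merged, k)
```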
Task and Path Planning for Multi-Agent Pickup and Delivery
Minghua Liu, Hang Ma, Jiaoyang Li, Sven Koenig
We study the Multi-Agent Pickup-and-Delivery (MAPD) problem, where a team of agents has to execute a batch of tasks in a known environment. To execute a task, an agent first has to move from its current location to the task's pickup location and then to its delivery location. The MAPD problem is to assign tasks to agents and plan collision-free paths for them to execute their tasks. Online MAPD algorithms can be applied to the offline MAPD problem, but they do not utilize all of the available information and may thus be ineffective. Therefore, we present two novel offline MAPD algorithms.
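To make the task-assignment half of MAPD concrete, here is a toy sketch, not one of the paper's algorithms: each task is greedily given to the free agent closest to its pickup location under Manhattan distance on a grid. A real MAPD solver couples assignment with collision-free path planning and reasons about all tasks jointly.

```python
def manhattan(a, b):
    """Manhattan (grid) distance between two (x, y) cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def assign_tasks(agents, tasks):
    """Greedy assignment: each task goes to the closest still-free agent.

    agents: {name: (x, y)} current agent locations.
    tasks: [((px, py), (dx, dy)), ...] pickup/delivery pairs.
    Returns {agent_name: (pickup, delivery)}.
    """
    free = dict(agents)
    assignment = {}
    for pickup, delivery in tasks:
        if not free:
            break  # more tasks than free agents; leftovers wait
        best = min(free, key=lambda a: manhattan(free[a], pickup))
        assignment[best] = (pickup, delivery)
        del free[best]
    return assignment
```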
HeteroFusion: Dense Scene Reconstruction Integrating Multi-sensors
Sheng Yang, Beichen Li, Minghua Liu, Yu-Kun Lai, Leif Kobbelt, Shi-Min Hu
We present a real-time approach that integrates multiple sensors for dense reconstruction of 3D indoor scenes. Existing algorithms are mainly based on a single RGB-D camera and require continuous scanning of areas with sufficient geometric details; failing to do so can lead to tracking loss. We incorporate multiple types of sensors commonly equipped on modern robots, including a 2D range sensor, an IMU, and wheel encoders, to reinforce the tracking process and obtain better mesh reconstruction.
Saliency-Aware Real-Time Volumetric Fusion for Object Reconstruction
Sheng Yang, Kang Chen, Minghua Liu, Hongbo Fu and Shi-Min Hu
Pacific Graphics 2017 [PDF]
We present a real-time approach for acquiring 3D objects with high fidelity using hand-held consumer-level RGB-D scanning devices. Existing real-time reconstruction methods may fail to produce clean reconstructions of the desired objects due to distracting objects or backgrounds. To address this, we incorporate visual saliency into a traditional real-time volumetric fusion pipeline. Salient regions detected in RGB-D frames suggest user-intended objects, and by understanding user intentions, our approach can put more emphasis on important targets while suppressing interference from unimportant objects.
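One way saliency can enter a volumetric fusion pipeline is by weighting the standard TSDF running average, so salient (user-intended) regions accumulate confidence faster while background observations barely contribute. This is a hypothetical single-voxel sketch of that idea, not the paper's pipeline; the weight cap is an illustrative choice.

```python
def fuse_voxel(tsdf, weight, sdf_obs, saliency, max_weight=64.0):
    """Saliency-weighted TSDF update for one voxel.

    tsdf, weight: the voxel's current running average and confidence.
    sdf_obs: the truncated signed distance observed in the new frame.
    saliency: per-pixel saliency in [0, 1] used as the observation weight,
    so non-salient observations barely move the stored value.
    """
    w_obs = saliency
    denom = weight + w_obs
    if denom == 0.0:
        return tsdf, weight  # nothing observed yet, nothing to fuse
    new_tsdf = (tsdf * weight + sdf_obs * w_obs) / denom
    return new_tsdf, min(denom, max_weight)
```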
I have a Labrador puppy named Jojo. She is growing so fast!