PU-Net: Point Cloud Upsampling Network

Lequan Yu*1,3
Xianzhi Li*1
Chi-Wing Fu1,3
Daniel Cohen-Or2
Pheng-Ann Heng1,3
1The Chinese University of Hong Kong
2Tel Aviv University
3Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China

Code [GitHub]
CVPR 2018 [Paper]




Abstract

Learning and analyzing 3D point clouds with deep networks is challenging due to the sparseness and irregularity of the data. In this paper, we present a data-driven point cloud upsampling technique. The key idea is to learn multilevel features per point and expand the point set via a multi-branch convolution unit implicitly in feature space. The expanded feature is then split into a multitude of features, which are then reconstructed into an upsampled point set. Our network is applied at the patch level, with a joint loss function that encourages the upsampled points to remain on the underlying surface with a uniform distribution. We conduct various experiments using synthetic and scanned data to evaluate our method and demonstrate its superiority over several baseline methods and an optimization-based method. Results show that our upsampled points have better uniformity and are located closer to the underlying surfaces.
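The joint loss combines a reconstruction term, which keeps the generated points close to the target surface, with a repulsion term, which spreads them out for uniformity. Below is a minimal sketch of the repulsion idea, assuming PyTorch; the neighborhood size k and bandwidth h are illustrative hyperparameters, and this is not the authors' released implementation.

```python
import torch

def repulsion_loss(points: torch.Tensor, k: int = 5, h: float = 0.03) -> torch.Tensor:
    # points: (B, N, 3) upsampled coordinates
    dists = torch.cdist(points, points)              # (B, N, N) pairwise distances
    # k nearest neighbors per point, dropping the zero self-distance
    knn, _ = dists.topk(k + 1, dim=2, largest=False)
    knn = knn[..., 1:]                               # (B, N, k)
    # eta(r) = -r rewards larger gaps; w(r) = exp(-r^2 / h^2) keeps the
    # penalty local, so only clumped points are pushed apart
    weight = torch.exp(-(knn ** 2) / (h ** 2))
    return (-knn * weight).mean()
```

Minimizing this term maximizes the weighted distance to each point's nearest neighbors, discouraging the expanded points from collapsing onto their source point.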


Overview

Architecture of PU-Net.


Architecture of Feature Expansion.
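As a rough illustration of the expansion unit above, the following sketch (assuming PyTorch; layer widths and names are illustrative, not the authors' released code) runs the embedded per-point features through r independent 1x1 convolution branches and interleaves the branch outputs, so each input point yields r expanded features:

```python
import torch
import torch.nn as nn

class FeatureExpansion(nn.Module):
    """Multi-branch feature expansion: N input features -> r*N features."""

    def __init__(self, in_channels: int, out_channels: int, ratio: int):
        super().__init__()
        # One small stack of 1x1 convolutions per expansion branch
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(in_channels, out_channels, kernel_size=1),
                nn.ReLU(),
                nn.Conv1d(out_channels, out_channels, kernel_size=1),
            )
            for _ in range(ratio)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, N) per-point features from the embedding stage
        expanded = [branch(feats) for branch in self.branches]
        out = torch.stack(expanded, dim=-1)   # (B, C', N, r)
        return out.flatten(2)                 # (B, C', r*N)
```

A coordinate regression head (e.g., fully connected layers applied per expanded feature) then maps the r*N features back to 3D points.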




Paper and Supplementary Material

Lequan Yu, Xianzhi Li, Chi-Wing Fu,
Daniel Cohen-Or, Pheng-Ann Heng.

PU-Net: Point Cloud Upsampling Network.
In CVPR, 2018.

[Paper] [supp] [poster]



Performance Comparisons

To the best of our knowledge, there is no prior deep-learning-based method for point cloud upsampling, so we design several baseline methods for comparison. Since PointNet and PointNet++ are pioneering deep learning techniques for reasoning about 3D point clouds, we build the baselines on them.

We formulate two metrics to measure the deviation between the output points and the ground-truth meshes, as well as the distribution uniformity of the output points. In particular, we design the normalized uniformity coefficient (NUC) to evaluate the uniformity of the point clouds. Please refer to the paper for more details.
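To convey the idea behind NUC: equal-area disks are placed on each test object's surface, and NUC is the standard deviation of the per-disk point fractions after normalizing by the expected fraction. A simplified sketch, assuming the per-disk counts have already been collected (the disk sampling on the mesh is omitted, and all names are illustrative):

```python
import numpy as np

def nuc(disk_counts: np.ndarray, total_points: np.ndarray, p: float) -> float:
    # disk_counts: (K, D) points inside each of D disks on each of K objects
    # total_points: (K,) total number of output points per object
    # p: fraction of an object's surface area covered by one disk
    normalized = disk_counts / (total_points[:, None] * p)  # ~1 if uniform
    avg = normalized.mean()
    # Standard deviation of the normalized counts: 0 means perfectly uniform
    return float(np.sqrt(np.mean((normalized - avg) ** 2)))
```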

Visual comparison of deviation. The colors on the points indicate the distance errors to the underlying surface.

Surface reconstruction results from the upsampled point clouds.

Quantitative comparison on the test dataset.



More Experiments

We also apply our method to point clouds produced from real scans, downloaded from Aim@Shape and obtained from the EAR project. Real scanned point clouds are often noisy and have an inhomogeneous point distribution. Compared with the input point clouds, our method is still able to generate more points near the edges and on the surface, while better preserving the sharp features.
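Upsampling can also be applied iteratively, feeding the network's output back as input so that each pass multiplies the point count by the upsampling ratio; results of this are shown below. A minimal sketch, where `model` is a hypothetical callable wrapping trained PU-Net inference (the patch extraction and merging steps of the full pipeline are omitted):

```python
import numpy as np

def upsample_iteratively(points: np.ndarray, model, passes: int = 2) -> np.ndarray:
    # `model` maps an (N, 3) point set to an (r*N, 3) point set
    for _ in range(passes):
        points = model(points)  # each pass grows the point count by a factor of r
    return points
```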

Results of iterative upsampling.

Surface reconstruction results from noisy input points.

Results on real-scanned point clouds.




Acknowledgements

We thank the anonymous reviewers for their comments and suggestions. This work is supported in part by the National Basic Research Program of China (973 Program, Project No. 2015CB351706), the Research Grants Council of the Hong Kong Special Administrative Region (Project No. CUHK 14225616), the Shenzhen Science and Technology Program (Project No. JCYJ20170413162617606), and the CUHK strategic recruitment fund.