Recent years have witnessed growing interest in understanding the semantics of point clouds across a wide variety of applications. However, point cloud labeling remains an open problem, owing to the difficulty of acquiring enough 3D point labels to train effective classifiers. In this paper, we overcome this challenge by combining the massive 2D semantically labeled datasets built through decade-long community efforts, such as ImageNet and LabelMe, with a novel "cross-domain" label propagation approach.
Our method consists of two novel components: Exemplar-SVM-based label propagation, which effectively addresses the cross-domain issue, and a graphical-model-based contextual refinement that incorporates 3D constraints. Most importantly, the entire process requires no training data from the target scenes and scales well to large-scale applications. We evaluate our approach on the well-known Cornell Point Cloud Dataset, achieving far greater efficiency and comparable accuracy even without any 3D training data. Our approach shows further major gains in accuracy when training
data from the target scenes is used, outperforming state-of-the-art approaches with far better efficiency.
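To make the Exemplar-SVM idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): one linear SVM is trained per labeled 2D exemplar against a shared pool of negatives, and an unlabeled target feature vector inherits the label of the exemplar whose SVM scores it highest. The feature dimensions, label names, and synthetic data below are illustrative assumptions only.

```python
# Illustrative Exemplar-SVM-style label propagation sketch.
# All data here is synthetic; labels and dimensions are hypothetical.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical labeled exemplars (one feature vector each), standing in
# for semantically labeled 2D data from datasets such as LabelMe.
exemplars = {
    "chair": rng.normal(loc=0.0, scale=0.5, size=(1, 16)),
    "table": rng.normal(loc=3.0, scale=0.5, size=(1, 16)),
}
# Shared pool of negative examples, well separated from both exemplars.
negatives = rng.normal(loc=-3.0, scale=1.0, size=(50, 16))

# Train one linear SVM per exemplar: its single positive vs. all negatives.
svms = {}
for label, pos in exemplars.items():
    X = np.vstack([pos, negatives])
    y = np.array([1] + [0] * len(negatives))
    svms[label] = LinearSVC(C=1.0, random_state=0).fit(X, y)

# Propagate a label to an unlabeled target feature vector (e.g. computed
# from a 3D segment) via the highest-scoring exemplar SVM.
query = rng.normal(loc=0.1, scale=0.3, size=(1, 16))  # near the "chair" exemplar
scores = {label: float(clf.decision_function(query)[0]) for label, clf in svms.items()}
predicted = max(scores, key=scores.get)
print(predicted)  # expected: chair
```

In the full pipeline described above, such per-exemplar scores would then feed a graphical model that refines the propagated labels using 3D contextual constraints; that refinement step is omitted here.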