Abstract
In point-cloud scenes, semantic segmentation is the basis for understanding a 3D scene. Because 3D point clouds are disordered and irregular, traditional convolutional neural networks cannot be applied to them directly, and most deep learning point-cloud models under-utilize spatial information and other related point-cloud features. Therefore, to better capture spatial point-neighborhood information and improve performance in point-cloud analysis for semantic segmentation, this paper proposes a multiscale, multi-feature PointNet (MSMF-PointNet) deep learning point-cloud model. MSMF-PointNet is based on the classical point-cloud model PointNet: two small feature-extraction networks, called Mini-PointNets, operate in parallel with a modified PointNet and extract multiscale, multi-neighborhood features for classification. We use the spherical neighborhood method to obtain the local neighborhood features of the point cloud, and then adjust the radius of the spherical neighborhood to obtain multiscale point-cloud features; the resulting multiscale neighborhood feature point set serves as the input to the network. A cross-sectional comparative analysis is conducted on the Vaihingen urban test dataset from the single-scale and single-feature perspectives.
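The multiscale spherical-neighborhood idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes SciPy's `cKDTree` for the radius search, and the helper name `multiscale_neighborhoods`, the example radii, and the random toy cloud are all hypothetical choices for demonstration.

```python
import numpy as np
from scipy.spatial import cKDTree

def multiscale_neighborhoods(points, query_idx, radii=(0.5, 1.0, 2.0)):
    """Return, for one query point, the indices of all points inside
    spheres of increasing radius (one index list per radius).

    Hypothetical helper: growing the radius yields the multiscale
    neighborhood point sets the abstract refers to."""
    tree = cKDTree(points)  # spatial index over the whole cloud
    center = points[query_idx]
    return {r: tree.query_ball_point(center, r) for r in radii}

# Toy cloud: 100 random 3D points in the unit cube (illustrative only).
rng = np.random.default_rng(0)
cloud = rng.random((100, 3))

neigh = multiscale_neighborhoods(cloud, query_idx=0)
for r, idx in sorted(neigh.items()):
    print(f"radius {r}: {len(idx)} neighbors")
```

By construction, the neighborhood at a smaller radius is contained in the neighborhood at any larger radius, so the three point sets form nested local contexts that a downstream network could consume at different scales.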