Semantic Segmentation of Large-Scale Point Clouds by Encoder-Decoder Shared MLPs with Weighted Focal Loss

2022 IEEE 21st International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)(2022)

Abstract
Semantic segmentation of large-scale outdoor point clouds is essential for autonomous driving and other applications. Because of the serious imbalance among semantic classes, it is a challenging problem. In this paper, we propose a point-based encoder-decoder shared multi-layer perceptrons (MLPs) network with weighted focal loss for semantic segmentation of large-scale point clouds. In the proposed network, we design a residual encoding block, composed of a relative position encoding block and two neighbor-feature gathering and combined pooling blocks, to aggregate rich neighboring-point information. To alleviate the class imbalance problem, we adopt a class-balanced sampler to select the input point cloud block at each iteration and use weighted focal loss during training. We conducted experiments on the Toronto-3D dataset; our method achieved an overall accuracy (OA) of 95.70% and a mean intersection over union (mIoU) of 71.85% when the input contains only coordinate information, and an OA of 97.81% and a mIoU of 81.16% when the input contains both coordinate and color information.
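The weighted focal loss used above down-weights easy, well-classified points and up-weights rare classes. A minimal sketch of that loss, assuming softmax probabilities per point and per-class weights (e.g. inverse class frequency); the function name and NumPy formulation are illustrative, not the authors' implementation:

```python
import numpy as np

def weighted_focal_loss(probs, labels, class_weights, gamma=2.0):
    """Weighted focal loss for multi-class point segmentation (a sketch).

    probs:         (N, C) softmax probabilities, one row per point
    labels:        (N,) integer ground-truth class indices
    class_weights: (C,) per-class weights, e.g. inverse class frequency
    gamma:         focusing parameter; gamma=0 reduces to weighted cross-entropy
    """
    # Probability assigned to the true class of each point.
    pt = probs[np.arange(len(labels)), labels]
    # Per-point class weight looked up from the per-class table.
    alpha = class_weights[labels]
    # (1 - pt)^gamma shrinks the loss of confident, easy points.
    loss = -alpha * (1.0 - pt) ** gamma * np.log(pt + 1e-12)
    return loss.mean()
```

With gamma=0 and uniform weights this is plain cross-entropy; raising gamma shifts the gradient budget toward hard points, which is what helps the under-represented classes mentioned in the abstract.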
Keywords
Large-scale point clouds, Semantic segmentation, Deep learning, Class imbalance