      Single-Click 3D Object Annotation on LiDAR Point Clouds

      By Trung Nguyen, Binh-Son Hua, Duc Thanh Nguyen, Dinh Phung

      We present a simple and effective tool for interactive 3D object annotation for 3D object detection on LiDAR point clouds. Our annotation pipeline begins with a pre-labeling stage that infers 3D bounding boxes automatically using a pre-trained deep neural network. While this stage greatly reduces manual effort for annotators, we found that pre-labeling is often imperfect, e.g., some bounding boxes are missing or inaccurate. In this paper, we propose to enhance the annotation pipeline with an interactive operator that allows users to generate a bounding box for a 3D object missed by the pre-trained model. This interaction takes place in the bird's-eye view (BEV), i.e., the top-down perspective of the point cloud, where users can inspect the existing annotations and place additional boxes accordingly. Our interactive operator requires only a single click, and inference is done by an object detector network trained in the BEV space. Experimental results show that, compared with existing annotation tools, our method boosts annotation efficiency by conveniently adding missing bounding boxes with more accurate dimensions using only single clicks.
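      To make the single-click flow concrete, here is a minimal sketch of how such an operator might be wired up: the point cloud is rasterized into a top-down BEV grid for the user to inspect, and a click at (x, y) crops a local patch of points that is handed to a detector, which returns one 3D box. The `bev_detector` callable, the patch size, and the (center, size, yaw) box format are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rasterize_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), resolution=0.1):
    """Project a LiDAR point cloud (N, 3+) onto a top-down occupancy grid for display."""
    w = int((x_range[1] - x_range[0]) / resolution)
    h = int((y_range[1] - y_range[0]) / resolution)
    grid = np.zeros((h, w), dtype=np.float32)
    xs = ((points[:, 0] - x_range[0]) / resolution).astype(int)
    ys = ((points[:, 1] - y_range[0]) / resolution).astype(int)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    grid[ys[valid], xs[valid]] = 1.0  # mark occupied BEV cells
    return grid

def annotate_from_click(points, click_xy, bev_detector, patch_size=8.0):
    """Given a single user click (x, y) in the BEV plane, crop a local patch of
    points around the click and let a detector (assumed interface) infer one 3D box."""
    x, y = click_xy
    mask = (np.abs(points[:, 0] - x) < patch_size / 2) & \
           (np.abs(points[:, 1] - y) < patch_size / 2)
    local_points = points[mask]
    # Assumed to return (cx, cy, cz, length, width, height, yaw) for the clicked object.
    return bev_detector(local_points, click_xy)
```

      In this sketch the expensive pre-labeling pass and the click-driven detector are kept separate: the user only triggers `annotate_from_click` for objects the pre-labeling stage missed, so each interaction costs a single detector call on a small local crop rather than a full re-run over the scene.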

      This video is from the NeurIPS 2021 Data-centric AI workshop proceedings.

