Data-centric AI

      Utilizing Driving Context to Increase the Annotation Efficiency of Imbalanced Gaze Image Data

      Johannes Rehm, Odd Erik Gundersen, Kerstin Bach, Irina Reshodko

      Knowing where the driver of a car is looking, whether in a mirror or through the windshield, is important for advanced driver assistance systems and driving-education applications. This problem can be addressed as a supervised classification task; however, in a typical dataset of driver video recordings, some classes dominate others. We implemented a driving video annotation tool (DVAT) that uses automatically recognized driving situations to focus the human annotator’s effort on snippets likely to contain otherwise rare classes. Using DVAT, we reduced the number of frames requiring human input by 94% while keeping the dataset more balanced and using human time efficiently.
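      The core idea can be sketched in code. This is not the authors' DVAT implementation; it is a minimal illustration, under assumed names (the situation-to-gaze mapping and `select_snippets` are hypothetical), of using recognized driving situations to prioritize snippets whose expected gaze classes are currently under-represented.

```python
from collections import Counter

# Hypothetical mapping from a recognized driving situation to the gaze
# classes it tends to elicit (e.g., lane changes prompt mirror checks).
SITUATION_TO_LIKELY_GAZE = {
    "lane_change_left": ["left_mirror", "left_shoulder"],
    "lane_change_right": ["right_mirror", "right_shoulder"],
    "reversing": ["rear_mirror", "over_shoulder"],
    "highway_cruise": ["windshield"],
}

def select_snippets(snippets, label_counts, budget):
    """Pick up to `budget` snippets whose driving situation suggests
    gaze classes that are currently under-represented in the dataset."""
    def rarity(snippet):
        likely = SITUATION_TO_LIKELY_GAZE.get(snippet["situation"], [])
        if not likely:
            return float("inf")  # no rare-class signal: deprioritize
        # Score by the scarcest gaze class this situation is likely to yield.
        return min(label_counts[c] for c in likely)
    return sorted(snippets, key=rarity)[:budget]

# Toy imbalanced label counts: windshield glances dominate.
label_counts = Counter({"windshield": 9000, "left_mirror": 40,
                        "right_mirror": 35, "left_shoulder": 5,
                        "right_shoulder": 4, "rear_mirror": 10,
                        "over_shoulder": 2})
snippets = [
    {"id": 1, "situation": "highway_cruise"},
    {"id": 2, "situation": "lane_change_right"},
    {"id": 3, "situation": "reversing"},
    {"id": 4, "situation": "lane_change_left"},
]
picked = select_snippets(snippets, label_counts, budget=2)
print([s["id"] for s in picked])  # → [3, 2]: reversing and right lane change
```

      The annotator then labels only the selected snippets, which is how a tool of this kind can cut the annotated frame count sharply while collecting more examples of the rare classes.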

      This video is from the NeurIPS 2021 Data-centric AI workshop proceedings.


      © 2022 Data-centric AI