Utilizing Driving Context to Increase the Annotation Efficiency of Imbalanced Gaze Image Data
Johannes Rehm, Odd Erik Gundersen, Kerstin Bach, Irina Reshodko
Knowing where the driver of a car is looking, whether in a mirror or through the windshield, is important for advanced driver assistance systems and driving education applications. This problem can be addressed as a supervised classification task. However, in a typical dataset of driver video recordings, some classes dominate others. We implemented a driving video annotation tool (DVAT) that uses automatically recognized driving situations to focus the human annotator’s effort on snippets that are likely to contain otherwise rarely occurring classes. By using DVAT, we reduced the number of frames requiring human input by 94% while producing a more balanced dataset and using human time more efficiently.
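The following is a minimal, hypothetical sketch of the pre-filtering idea described above: video snippets tagged with an automatically recognized driving situation are kept for human annotation only when that situation makes rare gaze classes (e.g., mirror checks) likely. All names (Snippet, select_for_annotation, the situation labels) are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Iterable, List

# Driving situations assumed to co-occur with rare gaze targets such as mirror checks.
RARE_CLASS_SITUATIONS = {"lane_change", "turn", "reversing", "merging"}


@dataclass
class Snippet:
    """A short run of consecutive video frames with one recognized driving situation."""
    start_frame: int
    end_frame: int
    situation: str  # label produced by an automatic driving-situation recognizer


def select_for_annotation(snippets: Iterable[Snippet]) -> List[Snippet]:
    """Keep only snippets whose driving situation suggests rarely occurring gaze classes."""
    return [s for s in snippets if s.situation in RARE_CLASS_SITUATIONS]


if __name__ == "__main__":
    recording = [
        Snippet(0, 300, "straight_driving"),   # dominant class, skipped
        Snippet(301, 360, "lane_change"),      # likely mirror glance, annotate
        Snippet(361, 900, "straight_driving"),
        Snippet(901, 980, "turn"),             # likely mirror/shoulder check, annotate
    ]
    to_annotate = select_for_annotation(recording)
    total = sum(s.end_frame - s.start_frame + 1 for s in recording)
    kept = sum(s.end_frame - s.start_frame + 1 for s in to_annotate)
    print(f"Annotating {kept}/{total} frames ({100 * (1 - kept / total):.0f}% reduction)")
```

The reduction printed here depends entirely on the toy data; it is not the 94% figure reported in the abstract, which comes from the authors' actual recordings and situation recognizer.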