Autonomous driving has recently gained significant attention due to its disruptive potential and impact on the global economy; however, these high expectations are hindered by strict safety requirements for redundant sensing modalities, each of which must be able to independently perform complex tasks to ensure reliable operation. At the core of an autonomous driving algorithmic stack is road segmentation, which is the basis for numerous planning and decision-making algorithms. Radar-based methods fail in many driving scenarios, mainly because various common road delimiters barely reflect radar signals, compounded by the lack of analytical models for road delimiters and the inherent limitations of radar angular resolution. Our approach feeds radar data, in the form of a two-dimensional complex range-Doppler array, into a deep neural network (DNN) that is trained to semantically segment the drivable area using weak supervision from a camera. Furthermore, guided backpropagation was utilized to analyse radar data and design a novel perception filter. Our approach enables road segmentation in common driving scenarios based solely on radar data, and we propose this method as an enabler of redundant sensing modalities for autonomous driving.
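The input representation described above, a two-dimensional complex range-Doppler array fed to a segmentation DNN, can be sketched minimally as follows. This is an illustrative stand-in, not the paper's architecture: the `complex_to_channels` encoding (stacking real and imaginary parts as channels) is one common assumption for handling complex-valued radar data, and the single convolution plus sigmoid merely shows the shape of a per-cell "drivable" probability map.

```python
import numpy as np


def complex_to_channels(rd: np.ndarray) -> np.ndarray:
    """Encode a complex range-Doppler array as a 2-channel real tensor.

    Stacking real and imaginary parts is an assumed preprocessing step;
    the paper's exact input encoding is not specified here.
    """
    return np.stack([rd.real, rd.imag], axis=0)  # shape (2, range, doppler)


def conv2d_valid(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Naive single-filter 'valid' 2-D convolution summed over channels."""
    c, h, ww = x.shape
    kc, kh, kw = w.shape
    assert c == kc, "filter must match input channel count"
    out = np.zeros((h - kh + 1, ww - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w)
    return out


def toy_segmenter(rd: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Toy stand-in for the DNN: one conv + sigmoid yields a
    per-cell drivable-area probability map over the range-Doppler grid."""
    x = complex_to_channels(rd)
    logits = conv2d_valid(x, w)
    return 1.0 / (1.0 + np.exp(-logits))  # probabilities in (0, 1)
```

In a real system the toy convolution would be replaced by a trained encoder-decoder segmentation network, with the camera-derived labels providing the weak supervision signal during training.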
All Science Journal Classification (ASJC) codes
- Human-Computer Interaction
- Computer Vision and Pattern Recognition
- Computer Networks and Communications
- Artificial Intelligence