TY - GEN
T1 - Enhancing AUV 3D Obstacle Avoidance
T2 - OCEANS 2024 - Halifax, OCEANS 2024
AU - Gutnik, Yevgeni
AU - Fabian, Izhak
AU - Zagdanski, Nir
AU - Gal, Oren
AU - Treibitz, Tali
AU - Groper, Morel
N1 - Publisher Copyright: © 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Autonomous underwater vehicles (AUVs) are typically programmed to follow routes based on predefined waypoints and depth profiles. However, in complex and unpredictable environments, such as coral reefs, offshore structures, or shipwrecks, AUVs can encounter unexpected obstacles that pose risks to both the vehicle and its surroundings. To navigate around such obstacles, AUVs operating in these environments are often equipped with forward-looking sonars (FLS). However, standard FLS sensors are typically limited in resolution and provide only 2D information on bearing and range, restricting their effectiveness for navigation in complex environments. Vision cameras, on the other hand, offer high-resolution data with bearing and elevation information, but a single-camera setup cannot reliably provide distance information. This study introduces a comprehensive framework for the fusion of forward-looking camera (FLC) and FLS data, using a projection of FLS data into the FLC frame and incorporating data from a trained self-supervised network.
AB - Autonomous underwater vehicles (AUVs) are typically programmed to follow routes based on predefined waypoints and depth profiles. However, in complex and unpredictable environments, such as coral reefs, offshore structures, or shipwrecks, AUVs can encounter unexpected obstacles that pose risks to both the vehicle and its surroundings. To navigate around such obstacles, AUVs operating in these environments are often equipped with forward-looking sonars (FLS). However, standard FLS sensors are typically limited in resolution and provide only 2D information on bearing and range, restricting their effectiveness for navigation in complex environments. Vision cameras, on the other hand, offer high-resolution data with bearing and elevation information, but a single-camera setup cannot reliably provide distance information. This study introduces a comprehensive framework for the fusion of forward-looking camera (FLC) and FLS data, using a projection of FLS data into the FLC frame and incorporating data from a trained self-supervised network.
KW - Autonomous underwater vehicles (AUVs)
KW - forward-looking sonar
KW - obstacle avoidance
KW - sensor fusion
KW - underwater image processing
KW - underwater navigation
UR - http://www.scopus.com/inward/record.url?scp=85212444421&partnerID=8YFLogxK
U2 - https://doi.org/10.1109/OCEANS55160.2024.10753707
DO - 10.1109/OCEANS55160.2024.10753707
M3 - Conference contribution
T3 - Oceans Conference Record (IEEE)
BT - OCEANS 2024 - Halifax, OCEANS 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 September 2024 through 26 September 2024
ER -