Abstract
This paper presents a vision-based, computationally efficient method for simultaneous robot motion estimation and dynamic target tracking while operating in GPS-denied unknown or uncertain environments. While numerous vision-based approaches achieve simultaneous ego-motion estimation along with detection and tracking of moving objects, many of them require a bundle adjustment optimization, which involves estimating the 3D points observed in the process. One of the main concerns in robotics applications is the computational effort required to sustain extended operation. In applications where the primary interest is highly accurate online navigation rather than mapping, the number of variables involved can be considerably reduced by avoiding explicit 3D structure reconstruction, thereby saving processing time. We take advantage of the light bundle adjustment method, which allows ego-motion to be calculated without online reconstruction of 3D points and thus significantly reduces computational time compared to bundle adjustment. The proposed method integrates the target tracking problem into the light bundle adjustment framework, yielding a simultaneous ego-motion estimation and tracking process in which the target is the only 3D point explicitly reconstructed online. Our approach is compared to bundle adjustment with target tracking in terms of accuracy and computational complexity, using simulated aerial scenarios and real-imagery experiments.
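To make the variable-reduction argument concrete, the following is a minimal sketch of the cost structures involved; the notation ($x_i$, $l_j$, $h_k$, $y_t$, $f$, $\pi$) is assumed here for illustration and is not taken from the paper.

```latex
% Bundle adjustment: camera poses X = {x_i} and all observed 3D landmarks
% L = {l_j} are estimated jointly from image measurements z_{ij} through
% the projection function \pi:
\[
  J_{\mathrm{BA}}(X, L) \;=\; \sum_{i,j} \bigl\| z_{ij} - \pi(x_i, l_j) \bigr\|_{\Sigma}^{2}
\]
% Light bundle adjustment (sketch): the landmarks are algebraically
% eliminated, leaving multi-view constraints h_k (e.g. two- and three-view)
% that involve camera poses and image measurements only:
\[
  J_{\mathrm{LBA}}(X) \;=\; \sum_{k} \bigl\| h_k(x_{k_1}, x_{k_2}, x_{k_3}, z_k) \bigr\|_{\Sigma_k}^{2}
\]
% Combined ego-motion estimation and tracking (assumed form): the moving
% target position y_t is the only explicitly reconstructed 3D state,
% observed via target measurements z_t^{tgt} and tied together by an
% assumed target motion model f:
\[
  J(X, Y) \;=\; J_{\mathrm{LBA}}(X)
  \;+\; \sum_{t} \bigl\| z_t^{\mathrm{tgt}} - \pi(x_t, y_t) \bigr\|_{\Sigma_v}^{2}
  \;+\; \sum_{t} \bigl\| y_{t+1} - f(y_t) \bigr\|_{\Sigma_w}^{2}
\]
```

Because the static 3D points never appear as optimization variables, the state grows only with the poses and the single target trajectory, which is the source of the computational savings claimed above.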
| Original language | English |
| --- | --- |
| Pages (from-to) | 157-170 |
| Number of pages | 14 |
| Journal | International Journal of Micro Air Vehicles |
| Volume | 10 |
| Issue number | 2 |
| DOIs | |
| State | Published - 1 Jun 2018 |
| Externally published | Yes |
Keywords
- Simultaneous localization and mapping
- bundle adjustment
- computer vision
- navigation
- target tracking
All Science Journal Classification (ASJC) codes
- Aerospace Engineering