Using diffusion map for visual navigation of a ground robot

Oleg Kupervasser, Hennadii Kutomanov, Michael Mushaelov, Roman Yavich

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a visual navigation method for determining the position and orientation of a ground robot from a diffusion map of robot images (obtained from a camera in an elevated position, e.g., on a tower or a drone), and for investigating the robot's stability with respect to desired paths under control with time delay. The time delay arises from the image processing required for visual navigation. We consider the diffusion map as a possible alternative to the currently popular deep learning approach and compare the capabilities of the two methods for visual navigation of ground robots. The diffusion map projects an image (described by a point in a high-dimensional space) onto a low-dimensional manifold while preserving the mutual relationships between the data points. We find the ground robot's position and orientation as a function of the coordinates of the robot image on the low-dimensional manifold obtained from the diffusion map, and we compare these coordinates with those obtained from deep learning. The diffusion-map-based algorithm has higher accuracy and is not sensitive to changes in lighting, the appearance of external moving objects, and similar phenomena. However, the diffusion map requires more computation time than deep learning. We discuss possible future steps for reducing this computation time.
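As a rough illustration of the projection step described in the abstract, the following is a minimal diffusion map sketch in Python/NumPy. It is not the authors' implementation: the Gaussian kernel bandwidth eps, the number of retained coordinates, and the toy image data are assumptions made for the example, and the standard density normalization is omitted for brevity.

    import numpy as np

    def diffusion_map(X, eps=1.0, n_components=2, t=1):
        """Project rows of X (one flattened image per row) onto the
        first n_components diffusion coordinates (sketch only)."""
        # Pairwise squared Euclidean distances between images.
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        # Gaussian (heat) kernel measures similarity between images.
        K = np.exp(-sq_dists / eps)
        # Row-normalize to obtain a Markov transition matrix P.
        P = K / K.sum(axis=1, keepdims=True)
        # Eigendecomposition of P, sorted by decreasing eigenvalue.
        eigvals, eigvecs = np.linalg.eig(P)
        order = np.argsort(-eigvals.real)
        eigvals = eigvals.real[order]
        eigvecs = eigvecs.real[:, order]
        # Diffusion coordinates: skip the trivial constant eigenvector
        # (eigenvalue 1) and scale the rest by lambda^t.
        return eigvecs[:, 1:n_components + 1] * eigvals[1:n_components + 1] ** t

    # Toy usage: 100 synthetic "images" of 64 pixels each.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 64))
    coords = diffusion_map(X, eps=50.0, n_components=2)
    print(coords.shape)  # (100, 2): one low-dimensional point per image

In the paper's setting, the rows of X would be camera images of the robot, and the robot's position and orientation would then be estimated as a function of the resulting two-dimensional coordinates.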

Original language: English
Article number: 2175
Pages (from-to): 1-16
Number of pages: 16
Journal: Mathematics
Volume: 8
Issue number: 12
DOIs
State: Published - Dec 2020

Keywords

  • Airborne control
  • Artificial neural network
  • Autopilot
  • Deep learning convolution network
  • Diffusion map
  • Ground robots
  • Prototype
  • Stability of differential equations
  • Tethered platform
  • Time delay
  • Vision-based navigation
  • Visual navigation

All Science Journal Classification (ASJC) codes

  • General Mathematics
