Abstract
Extracting meaningful representations from geometric data is of prime importance in computer vision, computer graphics, and image processing. Classical approaches use tools from differential geometry to model the problem and employ efficient and robust numerical techniques to engineer them for a particular application. Recent advances in learning methods, particularly in the areas of deep learning and neural networks, provide an alternative mechanism for extracting meaningful features and performing data engineering. These techniques have proven very successful on various visual and semantic cognition tasks, achieving state-of-the-art results. In this chapter, we explore the synergy between these two seemingly disparate computational methodologies. First, we provide a short treatise on geometric invariants of planar curves and a scheme to discover them from data in a learning framework, where the invariants are modelled using neural networks. Second, we demonstrate the reverse: imputing principled geometric invariants, such as geometric moments, into standard learning architectures yields a significant boost in performance. Our goal is not only to achieve better performance, but also to provide geometric insight into the learning process, thereby establishing strong links between the two fields.
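As a concrete illustration of the second point, the sketch below computes raw geometric moments of an image, the kind of principled invariant quantity the abstract proposes feeding into a standard learning architecture. This is a minimal NumPy sketch under our own assumptions (the function name, the moment ordering, and the idea of concatenating the result with learned features are illustrative, not the chapter's implementation).

```python
import numpy as np

def geometric_moments(image, max_order=3):
    """Raw geometric moments m_pq = sum_x sum_y x^p * y^q * I(x, y)
    for all orders p + q <= max_order, returned as a flat feature vector."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            feats.append(np.sum((xs ** p) * (ys ** q) * image))
    return np.array(feats)

# Example: moments of a small synthetic image; such a vector could be
# concatenated with the feature maps of a standard network as an extra,
# geometrically principled input channel.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
print(geometric_moments(img, max_order=2))
```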
Original language | English |
---|---|
Title of host publication | Handbook of Variational Methods for Nonlinear Geometric Data |
Pages | 443-461 |
Number of pages | 19 |
ISBN (Electronic) | 9783030313517 |
State | Published - 3 Apr 2020 |
All Science Journal Classification (ASJC) codes
- General Computer Science
- General Mathematics