TY - GEN
T1 - Learning filter functions in regularisers by minimising quotients
AU - Benning, Martin
AU - Gilboa, Guy
AU - Grah, Joana Sarah
AU - Schönlieb, Carola-Bibiane
N1 - Publisher Copyright: © Springer International Publishing AG 2017.
PY - 2017
Y1 - 2017
AB - Learning approaches have recently become very popular in the field of inverse problems. A large variety of methods has been established in recent years, ranging from bi-level learning to high-dimensional machine learning techniques. Most learning approaches, however, only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In this paper, we follow up on the idea of learning parametrised regularisation functions by quotient minimisation, as established in [3]. We extend the model therein to include higher-dimensional filter functions to be learned and allow for fit- and misfit-training data consisting of multiple functions. We first present results resembling the behaviour of well-established derivative-based sparse regularisers, such as total variation or higher-order total variation, in one dimension. Our second and main contribution is the introduction of novel families of non-derivative-based regularisers. This is accomplished by learning favourable scales and geometric properties while at the same time avoiding unfavourable ones.
KW - Generalised inverse power method
KW - Non-linear eigenproblem
KW - Regularisation learning
KW - Sparse regularisation
UR - http://www.scopus.com/inward/record.url?scp=85019700639&partnerID=8YFLogxK
DO - 10.1007/978-3-319-58771-4_41
M3 - Conference contribution
SN - 9783319587707
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 511
EP - 523
BT - Scale Space and Variational Methods in Computer Vision - 6th International Conference, SSVM 2017, Proceedings
A2 - Lauze, François
A2 - Dong, Yiqiu
A2 - Dahl, Anders Bjorholm
T2 - 6th International Conference on Scale Space and Variational Methods in Computer Vision, SSVM 2017
Y2 - 4 June 2017 through 8 June 2017
ER -