Variations on the Convolutional Sparse Coding Model

Ives Rey-Otero, Jeremias Sulam, Michael Elad

Research output: Contribution to journal › Article › peer-review

Abstract

Over the past decade, the celebrated sparse representation model has achieved impressive results in various signal and image processing tasks. A convolutional version of this model, termed convolutional sparse coding (CSC), has recently been reintroduced and extensively studied. CSC offers a natural remedy to the limitation of typical sparsity-enforcing approaches, which handle global, high-dimensional signals through local, patch-based processing. While the classic field of sparse representations has catered to the diverse challenges of different signal processing tasks by considering a wide range of problem formulations, almost all available algorithms that deploy the CSC model consider the same ℓ1-ℓ2 problem form. As we argue in this paper, this CSC pursuit formulation is also too restrictive, as it fails to explicitly exploit some local characteristics of the signal. This work expands the range of formulations for the CSC model by proposing two convex alternatives that merge global norms with local penalties and constraints. The main contribution of this work is the derivation of efficient and provably convergent algorithms to solve these new sparse coding formulations.
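For context, the ℓ1-ℓ2 pursuit that the abstract refers to is the common baseline formulation min over the feature maps Γ_i of 0.5·||y − Σ_i d_i * Γ_i||_2^2 + λ·Σ_i ||Γ_i||_1, where each d_i is a small local filter. The sketch below is not the algorithm derived in the paper; it is a minimal illustration of how this baseline pursuit can be solved with plain ISTA for a 1-D signal. The function names (csc_ista, soft_threshold), the step-size bound, the default λ, and the iteration count are assumptions made for the example.

    # Illustrative sketch only: ISTA for the baseline l1-l2 convolutional pursuit
    #   min_{maps} 0.5*||y - sum_i d_i * maps_i||_2^2 + lam * sum_i ||maps_i||_1
    # Odd filter length is assumed so that correlation with mode="same" acts as
    # the adjoint of convolution with mode="same".
    import numpy as np

    def soft_threshold(x, t):
        # Elementwise soft-thresholding: the proximal operator of the l1 norm.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def csc_ista(y, filters, lam=0.1, n_iter=200):
        # y       : (n,) observed 1-D signal
        # filters : (m, k) array of m local filters of odd length k
        # returns : (m, n) array of sparse feature maps, one per filter
        m, k = filters.shape
        n = y.size
        maps = np.zeros((m, n))

        # Conservative step size: m * max_i ||d_i||_1^2 upper-bounds the
        # Lipschitz constant of the data-fidelity gradient.
        L = m * max(np.sum(np.abs(f)) ** 2 for f in filters)
        step = 1.0 / L

        for _ in range(n_iter):
            # Current reconstruction: sum of convolutions d_i * maps_i.
            recon = sum(np.convolve(maps[i], filters[i], mode="same")
                        for i in range(m))
            residual = recon - y
            for i in range(m):
                # Gradient of the data term w.r.t. map i: correlation with d_i.
                grad_i = np.correlate(residual, filters[i], mode="same")
                maps[i] = soft_threshold(maps[i] - step * grad_i, step * lam)
        return maps

    # Example usage on synthetic data (purely illustrative):
    y = np.random.randn(256)
    filters = np.random.randn(4, 11)
    maps = csc_ista(y, filters, lam=0.2)

The paper's contribution lies in replacing the single global ℓ1 penalty above with convex formulations that mix global norms with local penalties and constraints, solved with parallel proximal algorithms; the sketch only shows the standard starting point being generalized.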

Original language: English
Article number: 8950415
Pages (from-to): 519-528
Number of pages: 10
Journal: IEEE Transactions on Signal Processing
Volume: 68
DOIs:
State: Published - 2020

Keywords

  • Sparse representation
  • convex optimization
  • convolutional sparse coding
  • parallel proximal algorithm

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Electrical and Electronic Engineering
