Abstract
The problem of learning a channel decoder for an
additive noise channel whose noise distribution is nonparametric
is considered. The learner is provided with a fixed codebook and
a dataset composed of independent samples of the noise, and
is required to select a precision matrix for a nearest neighbor
decoder in terms of the Mahalanobis distance. The objective of
maximizing the margin of the decoder is addressed. Accordingly,
a regularized loss minimization problem with a codebook-related
regularization term and a hinge-like loss function is developed,
which is inspired by the support vector machine paradigm for
classification problems. An expected generalization error bound for
this hinge loss is provided and shown to scale at a rate of
O(1/(λn)), where λ is a regularization tradeoff parameter and n is
the number of noise samples. Theoretical guidance for choosing the
training signal-to-noise ratio is proposed based on this bound. A stochastic sub-gradient
descent algorithm for solving the regularized loss minimization
problem is proposed, and an optimization error bound is stated,
which scales at a rate of Õ(1/(λT)), where T is the number of iterations. The performance of the
proposed algorithm is demonstrated through an example.
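
The abstract describes two computational pieces: a nearest-neighbor decoder under a Mahalanobis distance parameterized by a precision matrix, and a stochastic sub-gradient descent routine that minimizes a regularized hinge-like loss over noise samples. The sketch below is an illustrative reconstruction under simplifying assumptions, not the authors' algorithm: it uses a generic Frobenius-norm regularizer in place of the codebook-related term, one particular margin/hinge definition, a 1/(λt) step size, and a projection onto positive semidefinite matrices.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact formulation).
import numpy as np

def decode(y, codebook, P):
    """Return the index of the codeword x minimizing (y - x)^T P (y - x)."""
    diffs = codebook - y                              # shape: (M, d)
    dists = np.einsum('md,de,me->m', diffs, P, diffs)
    return int(np.argmin(dists))

def train_precision(codebook, noise_samples, lam=0.1, T=1000, seed=0):
    """Learn a precision matrix P by stochastic sub-gradient descent on a
    regularized hinge-like loss (Frobenius regularizer used here as a
    stand-in for the paper's codebook-related term)."""
    rng = np.random.default_rng(seed)
    M, d = codebook.shape
    P = np.eye(d)
    for t in range(1, T + 1):
        z = noise_samples[rng.integers(len(noise_samples))]
        i = rng.integers(M)                           # transmitted codeword
        j = rng.integers(M - 1)
        if j >= i:
            j += 1                                    # competing codeword, j != i
        u = codebook[j] - codebook[i]
        # margin = ||z - u||_P^2 - ||z||_P^2 = u^T P u - 2 u^T P z
        margin = u @ P @ u - 2.0 * u @ P @ z
        grad = lam * P                                # sub-gradient of (lam/2)||P||_F^2
        if margin < 1.0:                              # hinge active: subtract d(margin)/dP
            grad = grad - (np.outer(u, u) - np.outer(u, z) - np.outer(z, u))
        P = P - grad / (lam * t)                      # 1/(lam*t) step size
        P = 0.5 * (P + P.T)                           # keep P symmetric
        w, V = np.linalg.eigh(P)                      # project onto the PSD cone
        P = (V * np.clip(w, 0.0, None)) @ V.T
    return P
```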
Original language | English |
---|---|
Title of host publication | International Zurich Seminar on Information and Communication |
Subtitle of host publication | IZS |
Number of pages | 6 |
State | Published - 2022 |
Event | International Zurich Seminar on Information and Communication, Zurich; 2 Mar 2022 → 4 Mar 2022 |
Conference
Conference | International Zurich Seminar on Information and Communication |
---|---|
Abbreviated title | IZS |
City | Zurich |
Period | 2/03/22 → 4/03/22 |
Internet address | https://www.research-collection.ethz.ch/handle/20.500.11850/534535 |