TY - GEN
T1 - Learning Maximum Margin Channel Decoders for Non-linear Gaussian Channels
AU - Tsvieli, Amit
AU - Weinberger, Nir
N1 - Publisher Copyright: © 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - The problem of learning a channel decoder for an unknown non-linear white Gaussian noise channel is considered. The learner is provided with a fixed codebook and a dataset comprising n independent input-output samples of the channel, and is required to select a matrix for a nearest neighbor decoder with a linear kernel. The objective of maximizing the margin of the decoder is addressed. Accordingly, a regularized loss minimization problem with a codebook-related regularization term and a hinge-like loss function is developed, inspired by the support vector machine paradigm for classification problems. An expected generalization error bound for this hinge loss is provided for the solution of the regularized loss minimization problem, and is shown to scale at a rate of O(1/(λn)), where λ is a regularization tradeoff parameter. In addition, a high-probability uniform generalization error bound is provided for the hypothesis class, and is shown to scale at a rate of O(1/√n). A stochastic sub-gradient descent algorithm for solving the regularized loss minimization problem is proposed, and an optimization error bound is stated, which scales at a rate of Õ(1/(λT)). The performance of this algorithm is demonstrated by an example.
UR - http://www.scopus.com/inward/record.url?scp=85136266846&partnerID=8YFLogxK
U2 - 10.1109/ISIT50566.2022.9834818
DO - 10.1109/ISIT50566.2022.9834818
M3 - Conference contribution
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 2469
EP - 2474
BT - 2022 IEEE International Symposium on Information Theory, ISIT 2022
T2 - 2022 IEEE International Symposium on Information Theory, ISIT 2022
Y2 - 26 June 2022 through 1 July 2022
ER -