TY - GEN
T1 - An Entropy Maximization Approach to Optimal Dimensionality Reduction
AU - Dotan, Aviv
AU - Shriki, Oren
N1 - Publisher Copyright: © 2018 IEEE.
PY - 2018/10/10
Y1 - 2018/10/10
AB - The maximum entropy principle is a well-established approach to unsupervised optimization. Entropy maximization learning algorithms for single-layered neural networks already exist for cases in which the number of output neurons is greater than or equal to the number of input neurons. These models have been successfully employed in various applications, most notably for independent component analysis. In this work, we generalize the maximum entropy principle to a single-layered neural network with fewer output neurons than input neurons. The proposed learning algorithm finds a low-dimensional representation of the data and identifies the independent components within it. In general, such a model must incorporate some prior knowledge of the input distribution; however, we overcome this difficulty using a variational approach. We illustrate the performance of the model through several examples and compare it to other algorithms. While our model achieves results similar to those of the state-of-the-art algorithm for overdetermined independent component analysis within a similar convergence time, its main advantage lies in its ability to be learned efficiently online.
UR - http://www.scopus.com/inward/record.url?scp=85056496976&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2018.8489575
DO - 10.1109/IJCNN.2018.8489575
M3 - Conference contribution
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2018 International Joint Conference on Neural Networks, IJCNN 2018 - Proceedings
T2 - 2018 International Joint Conference on Neural Networks, IJCNN 2018
Y2 - 8 July 2018 through 13 July 2018
ER -