TY - GEN
T1 - K-vectors
T2 - 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019
AU - Weinberger, Nir
AU - Feder, Meir
N1 - Publisher Copyright: © 2019 IEEE.
PY - 2019/9
Y1 - 2019/9
N2 - The k-vectors algorithm for learning regression functions proposed here is akin to the well-known k-means algorithm. Both algorithms partition the space of 'features', but in contrast to the k-means algorithm, the k-vectors algorithm aims to reconstruct the regression function of the features (the response) rather than the features themselves. The partitioning rule of the algorithm is based on maximizing the correlation (inner product) of the feature vector x with a set of k vectors, and it generates polyhedral cells similar to the ones generated by the nearest-neighbor rule of the k-means algorithm. As in k-means, the learning algorithm alternates between two types of steps. In the first type of step, k labels are determined via a centroid-type rule (in the response space), and in the second type of step, the k vectors that determine the partition are updated according to a multiclass classification rule, in the spirit of support vector machines. It is proved that both steps of the algorithm require solving only convex optimization problems, and that the algorithm is empirically consistent: as the length of the training sequence increases, fixed points of the empirical algorithm tend to fixed points of the population algorithm.
UR - http://www.scopus.com/inward/record.url?scp=85077789543&partnerID=8YFLogxK
U2 - 10.1109/ALLERTON.2019.8919753
DO - 10.1109/ALLERTON.2019.8919753
M3 - Conference contribution
T3 - 2019 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019
SP - 887
EP - 894
BT - 2019 57th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2019
Y2 - 24 September 2019 through 27 September 2019
ER -
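
The abstract describes an alternating scheme: a partition step that assigns each feature vector to the cell of maximal inner product, a centroid-type step that labels each cell in the response space, and an SVM-style multiclass step that refits the k partitioning vectors. Below is a minimal NumPy sketch of that loop for intuition only: the random initialization, the retargeting rule, and the ordinary least-squares surrogate standing in for the SVM-style step are assumptions, not the authors' algorithm from the paper.

import numpy as np

def k_vectors(X, y, k, n_iters=20, seed=0):
    # Sketch of the alternating scheme in the abstract; the least-squares
    # classifier fit below is an assumed stand-in for the SVM-style step.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(k, X.shape[1]))   # the k partitioning vectors
    labels = np.zeros(k)
    for _ in range(n_iters):
        # Partition step: each sample goes to its max-inner-product cell,
        # which yields polyhedral cells in feature space.
        cells = np.argmax(X @ W.T, axis=1)
        # Centroid-type step (in the response space): each cell's label
        # is the mean response of the samples currently assigned to it.
        for j in range(k):
            if np.any(cells == j):
                labels[j] = y[cells == j].mean()
        # Classification step: retarget each sample to the cell whose
        # label is closest to its response, then refit the k vectors as
        # a linear multiclass classifier (least-squares surrogate here).
        targets = np.argmin(np.abs(y[:, None] - labels[None, :]), axis=1)
        T = np.eye(k)[targets]              # one-hot target matrix
        W = np.linalg.lstsq(X, T, rcond=None)[0].T
    return W, labels

def predict(W, labels, X):
    # Piecewise-constant regression estimate: the label of the winning cell.
    return labels[np.argmax(X @ W.T, axis=1)]

Note that both inner updates here are convex problems (a mean and a least-squares fit), mirroring the abstract's claim that each step of the actual algorithm requires solving only convex optimization problems.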