
Learning Sparser Perceptron Models

Research output: Contribution to journal › Article › peer-review

Abstract

The averaged-perceptron learning algorithm is simple, versatile, and effective. However, when used in NLP settings it tends to produce very dense solutions, even though much sparser ones are possible. We present a simple modification to the perceptron algorithm which allows it to produce sparser solutions while remaining accurate and computationally efficient. We test the method on a multiclass classification task, a structured prediction task, and a guided learning task. In all of the experiments the method produced models that are about 4-5 times smaller than those of the averaged perceptron, while remaining equally accurate.
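The abstract does not specify the sparsifying modification itself, so as background, here is a minimal sketch of the averaged-perceptron baseline it builds on (binary case, using the standard counter trick for efficient averaging). All names are illustrative, not from the paper. Note that every feature seen in a mistaken example receives weight mass, which is why the resulting models tend to be dense:

```python
def train_averaged_perceptron(data, n_features, epochs=10):
    """Averaged perceptron for binary classification.

    data: list of (x, y) pairs, where x is a sparse feature vector
          given as a dict {feature_index: value} and y is -1 or +1.
    Returns the averaged weight vector.
    """
    w = [0.0] * n_features  # current weights
    u = [0.0] * n_features  # accumulator for the averaging trick
    c = 1                   # counter of examples seen so far
    for _ in range(epochs):
        for x, y in data:
            score = sum(w[i] * v for i, v in x.items())
            if y * score <= 0:          # mistake: update the weights
                for i, v in x.items():  # every active feature gets mass
                    w[i] += y * v
                    u[i] += c * y * v   # counter-weighted update
            c += 1
    # The average of all intermediate weight vectors equals w - u/c.
    return [wi - ui / c for wi, ui in zip(w, u)]


def predict(w, x):
    """Sign of the dot product between weights and sparse features."""
    return 1 if sum(w[i] * v for i, v in x.items()) > 0 else -1
```

The counter trick (accumulating `c * y * v` in `u`) avoids summing a full copy of the weight vector after every example, which is what makes averaging cheap in high-dimensional NLP feature spaces.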
Original language: American English
Journal: ACL
State: Published - 2011