ShapeLearner: Towards Shape-Based Visual Knowledge Harvesting

Huayong Xu, Yafang Wang, Kang Feng, Gerard De Melo, Wei Wu, Andrei Sharf, Baoquan Chen

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


The deluge of images on the Web has led to a number of efforts to organize images semantically and mine visual knowledge. Despite enormous progress on categorizing entire images or bounding boxes, only a few studies have targeted fine-grained image understanding at the level of specific shape contours. For instance, beyond recognizing that an image portrays a cat, we may wish to distinguish its legs, head, tail, and so on. To this end, we present ShapeLearner, a system that acquires visual knowledge about object shapes and their parts within a semantic taxonomy, and can then exploit this hierarchy to analyze new kinds of objects it has not observed before. ShapeLearner jointly learns this knowledge from sets of segmented images. The space of label and segmentation hypotheses is pruned and then evaluated using Integer Linear Programming. Experiments on a variety of shape classes demonstrate the accuracy and effectiveness of our method.
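To give a flavor of the label-assignment problem the abstract describes, the sketch below solves a toy version: assign one part label to each shape segment so that the total label-segment compatibility score is maximized, with each label used at most once. All segment names, labels, and scores are invented for illustration, and the exhaustive search stands in for the Integer Linear Programming solver the paper actually uses; it is a minimal sketch, not the authors' implementation.

```python
from itertools import permutations

# Hypothetical compatibility scores between shape segments and candidate
# part labels (all numbers invented for illustration).
scores = {
    ("seg0", "head"): 0.9, ("seg0", "tail"): 0.1, ("seg0", "leg"): 0.2,
    ("seg1", "head"): 0.2, ("seg1", "tail"): 0.7, ("seg1", "leg"): 0.3,
    ("seg2", "head"): 0.1, ("seg2", "tail"): 0.2, ("seg2", "leg"): 0.8,
}
segments = ["seg0", "seg1", "seg2"]
labels = ["head", "tail", "leg"]

def best_assignment(segments, labels, scores):
    """Exhaustively find the assignment of one label per segment (each
    label used at most once) that maximizes the total score. A real ILP
    solver would express the same constraints with binary variables."""
    best, best_score = None, float("-inf")
    for perm in permutations(labels, len(segments)):
        total = sum(scores[(seg, lab)] for seg, lab in zip(segments, perm))
        if total > best_score:
            best, best_score = dict(zip(segments, perm)), total
    return best, best_score

assignment, total = best_assignment(segments, labels, scores)
print(assignment)  # {'seg0': 'head', 'seg1': 'tail', 'seg2': 'leg'}
```

At this toy scale exhaustive search suffices; the paper's ILP formulation handles the much larger hypothesis spaces that remain after pruning.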

Original language: American English
Title of host publication: Frontiers in Artificial Intelligence and Applications
Editors: Gal A. Kaminka, Maria Fox, Paolo Bouquet, Eyke Hüllermeier, Virginia Dignum, Frank Dignum, Frank van Harmelen
Publisher: IOS Press BV
Number of pages: 9
ISBN (Electronic): 9781614996712
ISBN (Print): 978-1-61499-671-2
State: Published - 2016
Event: 22nd European Conference on Artificial Intelligence, ECAI 2016 - The Hague, Netherlands
Duration: 29 Aug 2016 to 2 Sep 2016

Publication series

Name: Frontiers in Artificial Intelligence and Applications


Conference: 22nd European Conference on Artificial Intelligence, ECAI 2016
City: The Hague

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

