TY - GEN
T1 - Cross-supervised synthesis of web-crawlers
AU - Omari, Adi
AU - Shoham, Sharon
AU - Yahav, Eran
N1 - Publisher Copyright: © 2016 ACM.
PY - 2016/5/14
Y1 - 2016/5/14
AB - A web-crawler is a program that automatically and systematically tracks the links of a website and extracts information from its pages. Due to the different formats of websites, the crawling scheme for different sites can differ dramatically. Manually customizing a crawler for each specific site is time-consuming and error-prone. Furthermore, because sites periodically change their format and presentation, crawling schemes have to be manually updated and adjusted. In this paper, we present a technique for automatic synthesis of web-crawlers from examples. The main idea is to use hand-crafted (possibly partial) crawlers for some websites as the basis for crawling other sites that contain the same kind of information. Technically, we use the data on one site to identify data on another site. We then use the identified data to learn the website structure and synthesize an appropriate extraction scheme. We iterate this process, as synthesized extraction schemes result in additional data to be used for re-learning the website structure. We implemented our approach and automatically synthesized 30 crawlers for websites from nine different categories: books, TVs, conferences, universities, cameras, phones, movies, songs, and hotels.
UR - http://www.scopus.com/inward/record.url?scp=84971442621&partnerID=8YFLogxK
U2 - 10.1145/2884781.2884842
DO - 10.1145/2884781.2884842
M3 - Conference contribution
T3 - Proceedings - International Conference on Software Engineering
SP - 368
EP - 379
BT - Proceedings - 2016 IEEE/ACM 38th International Conference on Software Engineering, ICSE 2016
PB - IEEE Computer Society
T2 - 2016 IEEE/ACM 38th International Conference on Software Engineering, ICSE 2016
Y2 - 14 May 2016 through 22 May 2016
ER -