A minimal variance estimator for the cardinality of big data set intersection

Reuven Cohen, Liran Katzir, Aviv Yehezkel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In recent years there has been growing interest in developing "streaming algorithms" for efficient processing and querying of continuous data streams. These algorithms seek to provide accurate results while minimizing the required storage and processing time, at the price of a small inaccuracy in their output. A fundamental query of interest is the intersection size of two big data streams. This problem arises in many different application areas, such as network monitoring, database systems, data integration, and information retrieval. In this paper we develop a new algorithm for this problem, based on the Maximum Likelihood (ML) method. We show that this algorithm outperforms all known schemes in terms of estimation quality (lower variance) and that it asymptotically achieves the optimal variance.
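The abstract does not spell out the estimator itself, so the following Python sketch only illustrates the setting the paper targets: estimating |A ∩ B| from small per-stream summaries rather than from the raw streams. It uses a standard K-minimum-values (KMV) baseline with a Jaccard-based intersection estimate; the class and function names, the hash choice, and the parameter k are illustrative assumptions, not the paper's construction. The paper's contribution is an ML estimator reported to achieve lower variance than known schemes of this kind.

```python
import hashlib


def h(x):
    """Map an element to a pseudo-uniform value in [0, 1) via hashing."""
    digest = hashlib.sha1(str(x).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


class KMVSketch:
    """K-minimum-values sketch: keeps the k smallest hash values seen so far."""

    def __init__(self, k):
        self.k = k
        self.values = set()

    def add(self, x):
        self.values.add(h(x))
        if len(self.values) > self.k:
            self.values.discard(max(self.values))

    def cardinality(self):
        """Estimate the number of distinct elements as (k - 1) / (k-th smallest hash)."""
        if len(self.values) < self.k:
            return float(len(self.values))  # sketch not saturated: count is exact
        return (self.k - 1) / max(self.values)


def intersection_estimate(sa, sb):
    """Estimate |A ∩ B| from two KMV sketches built with the same hash and k.

    Classical baseline (not the paper's ML estimator): estimate the Jaccard
    similarity from the k smallest hashes of the union, then scale it by the
    union-cardinality estimate.
    """
    k = min(sa.k, sb.k)
    union_k = sorted(sa.values | sb.values)[:k]
    if not union_k:
        return 0.0
    in_both = sum(1 for v in union_k if v in sa.values and v in sb.values)
    jaccard = in_both / len(union_k)
    union_card = (k - 1) / union_k[-1] if len(union_k) == k else float(len(union_k))
    return jaccard * union_card


if __name__ == "__main__":
    A = range(0, 60_000)        # stream A: 60,000 distinct elements
    B = range(40_000, 100_000)  # stream B: 60,000 distinct, 20,000 shared with A
    sa, sb = KMVSketch(k=1024), KMVSketch(k=1024)
    for x in A:
        sa.add(x)
    for x in B:
        sb.add(x)
    print("estimated |A ∩ B| ≈", round(intersection_estimate(sa, sb)))  # true value: 20,000
```

With k = 1024 the estimate typically lands within a few percent of the true 20,000; the variance of such sketch-based estimators is exactly what the paper's ML approach is designed to minimize.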

Original language: English
Title of host publication: KDD 2017 - Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Pages: 95-103
Number of pages: 9
ISBN (Electronic): 9781450348874
DOIs
State: Published - 13 Aug 2017
Event: 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2017 - Halifax, Canada
Duration: 13 Aug 2017 - 17 Aug 2017

Publication series

Name: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Volume: Part F129685

Conference

Conference: 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2017
Country/Territory: Canada
City: Halifax
Period: 13/08/17 - 17/08/17

Keywords

  • Cardinality estimation
  • Data mining
  • Set intersection
  • Streaming algorithms

All Science Journal Classification (ASJC) codes

  • Software
  • Information Systems
