No-Reference video quality assessment of H.264 video streams based on semantic saliency maps

H. Boujut, J. Benois-Pineau, T. Ahmed, O. Hadar, P. Bonnet

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The paper contributes to No-Reference video quality assessment of broadcast HD video over IP networks and DVB. In this work we enhance our bottom-up spatio-temporal saliency map model by considering the semantics of the visual scene. We thus propose a new saliency map model based on face detection, which we call the semantic saliency map. A new fusion method is proposed to merge the bottom-up saliency maps with the semantic saliency map. We show that our NR metric WMBER weighted by the spatio-temporal-semantic saliency map provides better results than the WMBER weighted by the bottom-up spatio-temporal saliency map. Tests are performed on two H.264/AVC video databases for video quality assessment over lossy networks.
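The abstract describes merging a bottom-up spatio-temporal saliency map with a face-based semantic saliency map, then using the fused map to weight the WMBER (saliency-Weighted Macro-Block Error Rate) metric. The paper's actual fusion rule and WMBER definition are not given in the abstract; the sketch below illustrates the general idea under stated assumptions — a convex combination for fusion (the weight `alpha` is hypothetical) and a saliency-weighted average of a per-macroblock error map:

```python
import numpy as np

def fuse_saliency(bottom_up, semantic, alpha=0.5):
    """Merge bottom-up spatio-temporal and semantic (face) saliency maps.

    Illustrative only: the paper's fusion method is not specified in the
    abstract. Here a convex combination with hypothetical weight `alpha`
    is used, renormalized so the fused map peaks at 1.
    """
    fused = alpha * bottom_up + (1.0 - alpha) * semantic
    peak = fused.max()
    return fused / peak if peak > 0 else fused

def wmber(error_map, saliency):
    """Saliency-weighted macro-block error rate (illustrative sketch):
    average a per-macroblock error indicator with saliency-derived weights,
    so errors in salient regions (e.g. faces) count more."""
    weights = saliency / saliency.sum()
    return float((weights * error_map).sum())

# Toy usage: 2x2 "macroblock" maps.
bu = np.array([[0.2, 0.8], [0.4, 0.6]])    # bottom-up saliency
sem = np.array([[1.0, 0.0], [0.0, 0.0]])   # face detected in top-left block
fused = fuse_saliency(bu, sem, alpha=0.5)
err = np.array([[1.0, 0.0], [0.0, 1.0]])   # binary macroblock error map
quality_score = wmber(err, fused)
```

With a binary error map, the score lies in [0, 1]; the same lost macroblocks yield a worse score when they overlap the salient (face) region, which is the intuition behind saliency weighting.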

Original language: English
Title of host publication: Proceedings of SPIE-IS&T Electronic Imaging - Image Quality and System Performance IX
DOIs
State: Published - 13 Feb 2012
Event: Image Quality and System Performance IX - Burlingame, CA, United States
Duration: 24 Jan 2012 → 26 Jan 2012

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 8293

Conference

Conference: Image Quality and System Performance IX
Country/Territory: United States
City: Burlingame, CA
Period: 24/01/12 → 26/01/12

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering
