How to Train No Reference Video Quality Measures for New Coding Standards using Existing Annotated Datasets? / Fotio Tiotsop, Lohic; Mizdos, Tomas; Masala, Enrico; Barkowsky, Marcus; Pocta, Peter. - ELECTRONIC. - (2021), pp. 1-6. (Paper presented at the IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP 2021), held in Tampere, Finland, October 06-08, 2021) [10.1109/MMSP53017.2021.9733456].

How to Train No Reference Video Quality Measures for New Coding Standards using Existing Annotated Datasets?

Fotio Tiotsop, Lohic; Masala, Enrico
2021

Abstract

Subjective experiments are important for developing objective Video Quality Measures (VQMs). However, they are time-consuming and resource-demanding. In this context, being able to reuse existing subjective data on previous video coding standards to train models capable of predicting the perceptual quality of video content processed with newer codecs becomes particularly important. This paper investigates the possibility of generating an HEVC encoded Processed Video Sequence (PVS) in such a way that its perceptual quality is as similar as possible to that of an AVC encoded PVS whose quality has already been assessed by human subjects. In this way, the newly generated HEVC encoded PVS can be annotated approximately with the Mean Opinion Score (MOS) of the related AVC encoded PVS. To show the effectiveness of our approach, we compared the performance of a simple, low-complexity, yet effective no-reference hybrid model trained on the data generated with our approach against that of the same model trained on data collected in a pristine subjective experiment. In addition, we merged seven subjective experiments into one aligned dataset containing either original HEVC bitstreams or data newly generated with our proposed approach. The merging process accounts for the differences in quality scale, chosen assessment method, and context influence factors. This yields a large annotated dataset of HEVC sequences, made publicly available for the design and training of no-reference hybrid VQMs for HEVC encoded content.
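The abstract summarises a procedure that can be sketched in a few lines of code. The following Python sketch is only an illustration of the label-transfer idea, not the authors' actual pipeline: it assumes that ffmpeg with libx265 produces the HEVC candidates, that PSNR serves as the objective proxy for perceptual similarity, and that the CRF value is the encoding parameter being searched; the helper names encode_hevc, psnr and matching_hevc_pvs are hypothetical.

#!/usr/bin/env python3
# Minimal sketch (not the paper's actual method) of the label-transfer idea:
# given an AVC-encoded PVS with a known MOS, find the HEVC encoding of the
# same source whose objective quality is closest to that of the AVC PVS,
# then reuse the AVC MOS as an approximate label for the HEVC PVS.
# Assumptions: ffmpeg with libx265 is available, source and encoded clips
# share resolution and pixel format, and PSNR stands in for perceptual quality.
import re
import subprocess
import tempfile
from pathlib import Path


def encode_hevc(source: Path, crf: int, out: Path) -> None:
    """Encode `source` with libx265 at the given CRF (audio dropped)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(source),
         "-c:v", "libx265", "-crf", str(crf), "-an", str(out)],
        check=True, capture_output=True)


def psnr(distorted: Path, reference: Path) -> float:
    """Average PSNR of `distorted` vs `reference`, parsed from ffmpeg's log."""
    proc = subprocess.run(
        ["ffmpeg", "-i", str(distorted), "-i", str(reference),
         "-lavfi", "psnr", "-f", "null", "-"],
        capture_output=True, text=True)
    match = re.search(r"average:([\d.]+)", proc.stderr)
    if match is None:
        raise RuntimeError("could not parse PSNR from ffmpeg output")
    return float(match.group(1))


def matching_hevc_pvs(source: Path, avc_pvs: Path, avc_mos: float,
                      crf_values=range(18, 46)):
    """Return (hevc_pvs_path, approximate_mos) for the CRF whose PSNR
    is closest to the PSNR of the already-annotated AVC PVS."""
    target = psnr(avc_pvs, source)          # quality level to be matched
    best_path, best_gap = None, float("inf")
    for crf in crf_values:
        out = Path(tempfile.gettempdir()) / f"hevc_crf{crf}.mp4"
        encode_hevc(source, crf, out)
        gap = abs(psnr(out, source) - target)
        if gap < best_gap:
            best_path, best_gap = out, gap
    return best_path, avc_mos               # HEVC PVS inherits the AVC MOS

Repeating such a search for every annotated AVC PVS in an existing dataset would yield a set of HEVC sequences whose labels are the inherited MOS values, which is the kind of training data the paper targets; the actual matching criterion, encoder and parameter space used by the authors may differ.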
ISBN: 978-1-6654-3287-0
Files in this record:

File: FOTIO_ET_ALL_MMSP_2021-4.pdf
Access: open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 190.52 kB
Format: Adobe PDF

File: 9733456_How_to_Train_No_Reference_Video_Quality_Measures_for_New_Coding_Standards_using_Existing_Annotated_Datasets.pdf
Access: not available
Description: Publisher's version
Type: 2a Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 881.22 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2924852