Aldahdooh, A.; Masala, E.; Janssens, O.; Van Wallendael, G.; Barkowsky, M.; Le Callet, P., "Improved Performance Measures for Video Quality Assessment Algorithms Using Training and Validation Sets," IEEE Transactions on Multimedia, vol. 21, no. 8, pp. 2026-2041, 2019. ISSN 1520-9210. DOI: 10.1109/TMM.2018.2882091.

Improved Performance Measures for Video Quality Assessment Algorithms Using Training and Validation Sets

E. Masala
2019

Abstract

The training and performance analysis of objective video quality assessment algorithms is complex due to the huge variety of possible content classes and transmission distortions. Several secondary issues, such as free parameters in machine learning algorithms and the alignment of subjective datasets, put an additional burden on the developer. In this paper, three subsequent steps are presented to address such issues. First, the content and coding parameter space of a large-scale database is used to select dedicated subsets for training objective algorithms. This step provides a method for selecting the most significant contents and coding parameters from all imaginable combinations. In the practical case where only a limited set is available, it also helps to avoid redundancy in the training subset selection. The second step is a discussion of performance measures for algorithms that employ machine learning methods. The particularity of these performance measures is that the quality of the training and verification datasets is taken into consideration. Common issues that often arise with existing measures are presented, and improved or complementary methods are proposed. The measures are applied to two examples of no-reference objective assessment algorithms using the aforementioned subsets of the large-scale database. While limited in terms of practical applications, this sandbox approach of objectively predicting the quality of objectively evaluated video sequences allows for eliminating additional influence factors from subjective studies. In the third step, the proposed performance measures are applied to the practical case of training and analyzing assessment algorithms on readily available subjectively annotated image datasets. The presentation method in this part of the paper can also serve as an exemplary recommendation for reporting in-depth performance information. Using this presentation method, future publications presenting newly developed quality assessment algorithms may be significantly improved.
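
To illustrate the kind of redundancy-avoiding training subset selection described in the first step, here is a minimal sketch in Python. It uses a generic greedy max-min (farthest-point) heuristic over a feature space; the feature matrix, its dimensions, and the subset size are illustrative assumptions, not the paper's actual selection procedure or database.

import numpy as np

def select_subset(features, k):
    # Greedy max-min (farthest-point) selection: repeatedly pick the item
    # farthest from everything chosen so far, so the subset stays diverse
    # and redundant (near-duplicate) items are left out.
    chosen = [0]  # arbitrary seed item
    dist = np.linalg.norm(features - features[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))  # most novel remaining item
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

rng = np.random.default_rng(1)
# Hypothetical stand-in feature space, e.g. normalized spatial/temporal
# activity, bitrate, resolution, and framerate descriptors per sequence.
feats = rng.uniform(size=(500, 4))
print(select_subset(feats, 10))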
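
Likewise, the following is a minimal sketch of the baseline practice that the paper's proposed measures refine: reporting standard performance figures (PLCC, SROCC, RMSE) on both the training and the validation set rather than on a single set. The synthetic data, the split ratio, and the SVR regressor are illustrative assumptions, not the paper's experimental setup.

import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                 # stand-in objective features
y = X @ rng.normal(size=5) + rng.normal(scale=0.3, size=200)  # stand-in quality scores

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVR(kernel="rbf").fit(X_tr, y_tr)

def report(name, y_true, y_pred):
    plcc, _ = pearsonr(y_true, y_pred)    # Pearson linear correlation
    srocc, _ = spearmanr(y_true, y_pred)  # Spearman rank-order correlation
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    print(f"{name}: PLCC={plcc:.3f} SROCC={srocc:.3f} RMSE={rmse:.3f}")

# Comparing training and validation performance exposes overfitting,
# one of the issues that motivates taking dataset quality into account.
report("training  ", y_tr, model.predict(X_tr))
report("validation", y_va, model.predict(X_va))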
Files in this record:

author_copy_final_submitted_TMM2018.pdf
Access: open access
Description: Authors' version, final draft post-refereeing
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 2.37 MB
Format: Adobe PDF

FINAL_PUBLISHED_TMM2019_08540075.pdf
Access: restricted (copy available on request)
Description: Publisher's version
Type: 2a. Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 7.57 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2781792