Error-Tolerant Computation for Voting Classifiers with Multiple Classes / Liu, Shanshan; Reviriego, Pedro; Montuschi, Paolo; Lombardi, Fabrizio. - In: IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY. - ISSN 0018-9545. - Electronic. - 69:11 (2020), pp. 13718-13727. [DOI: 10.1109/TVT.2020.3025739]

Error-Tolerant Computation for Voting Classifiers with Multiple Classes

Montuschi, Paolo
2020

Abstract

In supervised learning, labeled data are provided as inputs and learning is then used to classify new observations. Error tolerance should be guaranteed for classifiers when they are employed in critical applications. A widely used type of classifier is based on voting, either among instances (referred to as single-voter classifiers) or among multiple voters (referred to as ensemble classifiers). When such classifiers are implemented on a processor, Time-Based Modular Redundancy (TBMR) techniques are often used for protection due to the inflexibility of the hardware. In TBMR, any single error can be handled at the cost of recomputing either once (for detection) or twice (for correction after detection); however, this technique increases the computation overhead by at least 100%. The Voting Margin (VM) scheme has recently been proposed to reduce the computation overhead of TBMR, but it has so far been applied only to k Nearest Neighbors (kNN) classifiers with two classes. In this paper, the VM scheme is extended to multiple classes, as well as to other voting classifiers, by exploiting the intrinsic robustness of the algorithms. kNN (a single-voter classifier) and Random Forest (RF) (an ensemble classifier) are considered to evaluate the proposed scheme. Across multiple datasets, the results show that the proposed scheme reduces the computation overhead by more than 70% for kNN while preserving good classification accuracy, and by more than 90% for RF in all cases. However, when extended to multiple classes, the VM scheme for kNN is not efficient for some datasets. Therefore, a new protection scheme, referred to as k+1 NNs, is presented as an alternative that provides efficient protection in those scenarios. In the new scheme, the computation overhead is further reduced at the cost of allowing a very low percentage of errors to modify the classification outcome.
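To make the voting-margin idea concrete, below is a minimal Python sketch of how a margin check can avoid redundant recomputation for a multi-class kNN classifier. This is an illustration only, not the authors' implementation: the function names, the Euclidean distance metric, and the fixed margin threshold are assumptions for the sketch. The fallback path follows the TBMR pattern described in the abstract, recomputing once for detection and a second time for correction.

import numpy as np
from collections import Counter

def knn_vote(train_X, train_y, x, k):
    # Distances from the query point to every training point.
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    counts = Counter(train_y[i] for i in nearest)
    ranked = counts.most_common()
    winner, top = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    # Return the winning class and the multi-class vote margin
    # (votes for the winner minus votes for the runner-up).
    return winner, top - runner_up

def protected_knn(train_X, train_y, x, k):
    label, margin = knn_vote(train_X, train_y, x, k)
    if margin > 2:
        # A single error can move at most one vote between classes,
        # changing the margin by at most 2; a larger margin means the
        # outcome cannot flip, so no redundant execution is needed.
        return label
    # Close vote: fall back to time redundancy (TBMR style);
    # recompute once and compare to detect a possible error.
    label2, _ = knn_vote(train_X, train_y, x, k)
    if label2 == label:
        return label
    # Disagreement detected: recompute a second time and take the
    # majority of the three results as the corrected outcome.
    label3, _ = knn_vote(train_X, train_y, x, k)
    return label3 if label3 == label2 else label

# Example usage with a toy 3-class dataset (values are arbitrary):
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0],
              [1.1, 1.0], [0.0, 1.0], [0.1, 1.0]])
y = np.array([0, 0, 1, 1, 2, 2])
print(protected_knn(X, y, np.array([0.05, 0.05]), k=3))

The saving comes from the first branch: when the vote margin is comfortable, which the abstract reports is the common case, the redundant executions are skipped entirely, so only the queries with close votes pay the TBMR cost.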
Files in this record:

09201471.pdf (not available)
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 2.45 MB
Format: Adobe PDF

FINAL VERSION tvt.pdf (open access)
Description: authors' accepted paper version
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 2.06 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2846080