Hallucinating Agnostic Images to Generalize Across Domains / Carlucci, Fabio Maria; Russo, Paolo; Tommasi, Tatiana; Caputo, Barbara. - ELECTRONIC. - (2019), pp. 3227-3234. (Paper presented at the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), held in Seoul, South Korea, 27-28 Oct. 2019) [10.1109/ICCVW.2019.00403].

Hallucinating Agnostic Images to Generalize Across Domains

Tatiana Tommasi; Barbara Caputo
2019

Abstract

The ability to generalize across visual domains is crucial for the robustness of artificial recognition systems. Although many training sources may be available in real contexts, access to even unlabeled target samples cannot be taken for granted, which makes standard unsupervised domain adaptation methods inapplicable in the wild. In this work we investigate how to exploit multiple sources by hallucinating a deep visual domain composed of images, possibly unrealistic, that maintain categorical knowledge while discarding source-specific styles. The produced agnostic images are the result of a deep architecture that applies pixel adaptation to the original source data, guided by two adversarial domain classifier branches at image and feature level. Our approach is conceived to learn only from source data, but it seamlessly extends to the use of unlabeled target samples. Remarkable results for both multi-source domain adaptation and domain generalization support the power of hallucinating agnostic images in this framework.
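The abstract describes pixel-level adaptation of source images guided by adversarial domain classifiers at image and feature level. The following is a minimal, hypothetical PyTorch sketch of such a setup, assuming a gradient-reversal-style adversarial objective; all module names, network sizes, and loss weights are illustrative assumptions and do not reproduce the paper's actual architecture.

```python
# Hedged sketch, not the authors' code: a pixel-level "hallucinator" G, a
# category classifier C, and two adversarial domain classifiers (image-level
# and feature-level) trained through gradient reversal so that G learns to
# produce source-agnostic images. Everything here is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class Hallucinator(nn.Module):
    """Pixel-level transformer: maps a source image to an 'agnostic' image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class Model(nn.Module):
    def __init__(self, num_classes, num_sources):
        super().__init__()
        self.hallucinator = Hallucinator()
        self.features = nn.Sequential(              # toy feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, num_classes)      # category head
        self.d_feat = nn.Linear(64, num_sources)          # feature-level domain branch
        self.d_img = nn.Sequential(                       # image-level domain branch
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_sources),
        )

    def forward(self, x, lambd=1.0):
        agnostic = self.hallucinator(x)
        feat = self.features(agnostic)
        class_logits = self.classifier(feat)
        # The domain branches see gradient-reversed inputs: they learn to predict
        # the source domain, while the hallucinator and features learn to hide it.
        dom_img = self.d_img(grad_reverse(agnostic, lambd))
        dom_feat = self.d_feat(grad_reverse(feat, lambd))
        return class_logits, dom_img, dom_feat


# Usage sketch: one training step on a batch of labeled source images.
if __name__ == "__main__":
    model = Model(num_classes=7, num_sources=3)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    x = torch.randn(8, 3, 64, 64)                  # source images
    y = torch.randint(0, 7, (8,))                  # category labels
    d = torch.randint(0, 3, (8,))                  # source-domain labels
    class_logits, dom_img, dom_feat = model(x)
    loss = (F.cross_entropy(class_logits, y)
            + F.cross_entropy(dom_img, d)
            + F.cross_entropy(dom_feat, d))
    opt.zero_grad(); loss.backward(); opt.step()
```

With this kind of setup, unlabeled target samples can be folded in by feeding them only through the domain branches, which is consistent with the abstract's claim that the method extends seamlessly beyond source-only training; the exact losses and schedules used in the paper are not reproduced here.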
Files in this record:

Carlucci_Hallucinating_Agnostic_Images_to_Generalize_Across_Domains_ICCVW_2019_paper(1).pdf
Access: open access
Type: 2. Post-print / Author's Accepted Manuscript
License: PUBLIC - All rights reserved
Size: 409.5 kB
Format: Adobe PDF

09022393.pdf
Access: not available
Type: 2a. Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 377.3 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2785359