Spatial Temporal Transformer Network for Skeleton-Based Action Recognition / Plizzari, C.; Cannici, M.; Matteucci, M. - Electronic. - 12663:(2021), pp. 694-701. (Paper presented at the 25th International Conference on Pattern Recognition Workshops, ICPR 2020, held in Italy in 2021) [10.1007/978-3-030-68796-0_50].
Spatial Temporal Transformer Network for Skeleton-Based Action Recognition
Plizzari C.; Cannici M.; Matteucci M.
2021
Abstract
Skeleton-based human action recognition has attracted great interest in recent years, as skeleton data has been demonstrated to be robust to illumination changes, body scales, dynamic camera views, and complex backgrounds. Nevertheless, an effective encoding of the latent information underlying the 3D skeleton is still an open problem. In this work, we propose a novel Spatial-Temporal Transformer network (ST-TR) which models dependencies between joints using the Transformer self-attention operator. In our ST-TR model, a Spatial Self-Attention module (SSA) is used to understand intra-frame interactions between different body parts, and a Temporal Self-Attention module (TSA) to model inter-frame correlations. The two are combined in a two-stream network which outperforms state-of-the-art models using the same input data on both NTU-RGB+D 60 and NTU-RGB+D 120.
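To make the two attention patterns in the abstract concrete, below is a minimal PyTorch sketch, not the authors' released implementation: it assumes a `(batch, frames, joints, channels)` tensor layout and uses the standard `nn.MultiheadAttention` operator, with the hypothetical class names `SpatialSelfAttention` and `TemporalSelfAttention` standing in for the paper's SSA and TSA modules. SSA attends across joints within each frame, TSA across frames for each joint.

```python
# Minimal sketch of spatial vs. temporal self-attention over skeleton data.
# Layout assumption (not from the paper): x has shape (batch, frames, joints, channels).
import torch
import torch.nn as nn


class SpatialSelfAttention(nn.Module):
    """Attend across the joints of each frame (intra-frame interactions)."""

    def __init__(self, channels: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, v, c = x.shape
        # Fold time into the batch so attention runs over the V joints only.
        y = x.reshape(b * t, v, c)
        y, _ = self.attn(y, y, y)
        return y.reshape(b, t, v, c)


class TemporalSelfAttention(nn.Module):
    """Attend across the frames of each joint (inter-frame correlations)."""

    def __init__(self, channels: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, v, c = x.shape
        # Fold joints into the batch so attention runs over the T frames only.
        y = x.permute(0, 2, 1, 3).reshape(b * v, t, c)
        y, _ = self.attn(y, y, y)
        return y.reshape(b, v, t, c).permute(0, 2, 1, 3)


if __name__ == "__main__":
    # Toy skeleton sequence: 2 clips, 64 frames, 25 joints, 64-d features.
    x = torch.randn(2, 64, 25, 64)
    print(SpatialSelfAttention(64)(x).shape)   # torch.Size([2, 64, 25, 64])
    print(TemporalSelfAttention(64)(x).shape)  # torch.Size([2, 64, 25, 64])
```

In the paper these two modules feed separate streams whose outputs are combined, which is why the sketch keeps SSA and TSA as independent blocks operating on the same input tensor.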
| File | Description | Type | License | Size | Format | Access |
|---|---|---|---|---|---|---|
| Workshop_FBE_paper.pdf | Main article | Post-print / Author's Accepted Manuscript | Public - All rights reserved | 609.42 kB | Adobe PDF | Open access (View/Open) |
| Plizzari2021_Chapter_SpatialTemporalTransformerNetw.pdf | | Post-print, publisher's version / Version of Record | Non-public - Private/restricted access | 368.29 kB | Adobe PDF | Restricted (View/Open, Request a copy) |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2922032