Automatic generation of affective 3D virtual environments from 2D images / Cannavò, Alberto; D'Alessandro, Arianna; Maglione, Daniele; Marullo, Giorgia; Zhang, Congyi; Lamberti, Fabrizio. - PRINT. - (2020), pp. 113-124. (Paper presented at the 15th International Conference on Computer Graphics Theory and Applications (GRAPP 2020), held in Valletta, Malta, February 27-29, 2020) [10.5220/0008951301130124].

Automatic generation of affective 3D virtual environments from 2D images

Alberto Cannavò; Arianna D'Alessandro; Giorgia Marullo; Fabrizio Lamberti
2020

Abstract

Today, a wide range of domains, encompassing, e.g., movie and video game production, virtual reality simulations, and augmented reality applications, makes massive use of 3D computer-generated assets. Although many graphics suites already offer a large set of tools and functionalities for creating such content, they are usually characterized by a steep learning curve. This aspect can make it difficult for non-expert users to create 3D scenes, e.g., for sharing their ideas or for prototyping purposes. This paper presents a computer-based system able to generate a possible reconstruction of the 3D scene depicted in a 2D image, by inferring the objects, materials, textures, lights, and camera required for rendering it. The integration of the proposed system into a well-known graphics suite enables further refinement of the generated scene using traditional techniques. Moreover, the system allows users to explore the scene in an immersive virtual environment, for a better understanding of the current objects' layout, and offers the possibility to convey emotions through specific aspects of the generated scene. The paper also reports the results of a user study carried out to evaluate the usability of the proposed system from different perspectives.
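As an illustration of the kind of output described in the abstract, the following is a minimal, hypothetical sketch of how an inferred scene description (objects, materials, lights, and camera) could be instantiated programmatically. It assumes the unnamed "well-known graphics suite" is Blender and uses its bpy Python API; the data format and all values shown are invented for illustration and are not taken from the paper.

import bpy

# Hypothetical output of the inference step: primitive type, position, material color.
inferred_scene = [
    {"type": "CUBE", "location": (0.0, 0.0, 0.5), "color": (0.8, 0.1, 0.1, 1.0)},
    {"type": "SPHERE", "location": (2.0, 1.0, 1.0), "color": (0.1, 0.3, 0.8, 1.0)},
]

for item in inferred_scene:
    # Create a placeholder primitive for each inferred object.
    if item["type"] == "CUBE":
        bpy.ops.mesh.primitive_cube_add(location=item["location"])
    else:
        bpy.ops.mesh.primitive_uv_sphere_add(location=item["location"])
    obj = bpy.context.active_object
    # Assign a simple material with the inferred color (RGBA).
    mat = bpy.data.materials.new(name="inferred_material")
    mat.diffuse_color = item["color"]
    obj.data.materials.append(mat)

# Add a light and a camera so the reconstructed scene can be rendered.
bpy.ops.object.light_add(type='SUN', location=(4.0, -4.0, 6.0))
bpy.ops.object.camera_add(location=(6.0, -6.0, 4.0), rotation=(1.1, 0.0, 0.8))
bpy.context.scene.camera = bpy.context.active_object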
ISBN: 978-989-758-402-2
Files in this item:

VISIGRAPP_2020_Volume_1_-_GRAPP paper only_compressed.pdf (not available)
Description: Post-print, publisher's version
Type: 2a Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 237.26 kB
Format: Adobe PDF

authors copy accepted paper Automatic Generation of Affective 3D Virtual Environments from 2D Images_compressed.pdf (open access)
Description: Post-print, author's version
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 440.27 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2773852