
Metrics Gathering for AI-based Trust Assessment / Ferro, Lorenzo; Ciravegna, Flavio; Zaritto, Francesco; Lioy, Antonio; Landeiro Ribeiro, Luis. - (In press). (Paper presented at The 3rd International Conference on Intelligent Computing, Communication, Networking and Services (ICCNS2025), held in Varna (BG), 1-4 September 2025).

Metrics Gathering for AI-based Trust Assessment

Ferro, Lorenzo; Ciravegna, Flavio; Zaritto, Francesco; Lioy, Antonio; Landeiro Ribeiro, Luis
In press

Abstract

Technological advances have increased the complexity and volume of system operations. As a result, today's systems face broader attack surfaces due to the larger amount of code executed. Ensuring the reliability of applications and operations in such environments requires well-defined security criteria, but establishing them is a non-trivial task. Trusted Computing offers mechanisms to establish trust in a non-repudiable manner, yet it faces inefficiencies and scalability challenges when applied to complex scenarios. In fact, defining a flexible verification method for a complex system is a challenging task, especially when the process focuses solely on verifying individual actions rather than on the overall system's behaviour. AI-based techniques offer a viable countermeasure, since modelling a system's behaviour provides an adaptable approach to determining reliability, and an AI model can also analyse larger amounts of data. However, collecting, securely storing, and provisioning the relevant information remains a necessary capability. This work proposes two approaches for the secure collection of system events. The objective is to facilitate provisioning an AI model while ensuring the integrity and authenticity of the data.
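As a rough illustration of what "secure collection of system events" can mean, the sketch below hash-chains log entries and authenticates each one with an HMAC, so that tampering with or reordering stored events is detectable. This is only a minimal illustration under assumed names (`EventLog`, a shared secret key); it is not the mechanism proposed in the paper, which may instead rely on Trusted Computing hardware.

```python
import hashlib
import hmac
import json

class EventLog:
    """Hypothetical append-only event log: each entry is chained to the
    previous one by a SHA-256 hash and authenticated with an HMAC."""

    def __init__(self, key: bytes):
        self.key = key
        self.entries = []
        self.prev_digest = b"\x00" * 32  # anchor for the hash chain

    def append(self, event: dict) -> None:
        # Canonical serialisation so verification recomputes the same bytes.
        payload = json.dumps(event, sort_keys=True).encode()
        digest = hashlib.sha256(self.prev_digest + payload).digest()
        tag = hmac.new(self.key, digest, hashlib.sha256).hexdigest()
        self.entries.append((payload, digest.hex(), tag))
        self.prev_digest = digest

    def verify(self) -> bool:
        # Re-walk the chain; any modified, reordered, or forged entry
        # breaks either the hash chain or its HMAC.
        prev = b"\x00" * 32
        for payload, digest_hex, tag in self.entries:
            digest = hashlib.sha256(prev + payload).digest()
            if digest.hex() != digest_hex:
                return False
            expected = hmac.new(self.key, digest, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(tag, expected):
                return False
            prev = digest
        return True

log = EventLog(key=b"shared-secret")
log.append({"pid": 42, "op": "exec", "path": "/usr/bin/ls"})
log.append({"pid": 42, "op": "open", "path": "/etc/passwd"})
assert log.verify()
```

A verifier holding the key can thus check both integrity (the hash chain) and authenticity (the HMAC) of the collected events before feeding them to an AI model.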
Files in this item:

ICCNS_camera_ready.pdf

Open access

Description: accepted paper pre-print
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 471.29 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11583/3002794