IRIS Pol. Torino
https://iris.polito.it
The IRIS digital repository system acquires, archives, indexes, preserves, and makes accessible digital research products.
Sun, 15 Sep 2019 07:47:39 GMT

Classification and analysis of communication protection policy anomalies
http://hdl.handle.net/11583/2673838
Title: Classification and analysis of communication protection policy anomalies
Abstract: This paper presents a classification of the anomalies that can appear when designing or implementing communication protection policies. Together with the already known intra- and inter-policy anomaly types, we introduce a novel category, the inter-technology anomalies, related to security controls implementing different technologies, both within the same network node and among different network nodes. Through an empirical assessment, we prove the practical significance of detecting this new anomaly class. Furthermore, this paper introduces a formal model, based on first-order logic rules, that analyses the network topology and the security controls at each node to identify these anomalies and suggest strategies to resolve them. This formal model has manageable computational complexity, and its implementation has shown excellent performance and good scalability.
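As an illustration of the kind of intra-policy anomaly the paper classifies, the sketch below detects rule shadowing: a later filtering rule that can never fire because an earlier rule with a different action matches a superset of its packets. The rule fields and the simplified wildcard matching are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of intra-policy shadowing detection. A rule is a dict with
# "src", "dst", "port" (exact value or "*") and an "action". Fields and
# matching semantics are invented for illustration.

def covers(a, b):
    """True if rule a matches every packet that rule b matches."""
    return all(a[f] == "*" or a[f] == b[f] for f in ("src", "dst", "port"))

def shadowing_anomalies(policy):
    """Return (earlier, later) index pairs where rule `later` can never fire."""
    anomalies = []
    for j, later in enumerate(policy):
        for i, earlier in enumerate(policy[:j]):
            if covers(earlier, later) and earlier["action"] != later["action"]:
                anomalies.append((i, j))
                break
    return anomalies

policy = [
    {"src": "*", "dst": "10.0.0.5", "port": "*", "action": "deny"},
    {"src": "10.0.0.1", "dst": "10.0.0.5", "port": "80", "action": "allow"},
]
print(shadowing_anomalies(policy))  # rule 1 is shadowed by rule 0: [(0, 1)]
```

A full analysis would also cover correlation, generalization, and redundancy, plus the inter-policy and inter-technology classes the paper introduces.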
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/11583/2673838

Assessing network authorization policies via reachability analysis
http://hdl.handle.net/11583/2671778
Title: Assessing network authorization policies via reachability analysis
Abstract: Evaluating if a computer network only permits allowed business operations without transmitting unwanted or malicious traffic is a crucial security task. Reachability analysis – the process that evaluates allowed communications – is a tool useful not only to discover security issues but also to identify network misconfigurations. This paper presents a novel approach to quantify network reachability based on the concept of equivalent firewall – a fictitious device, ideally connected directly to the communicating peers and whose policy summarizes the network behaviour between them – that can be queried to derive reachability information. We build equivalent firewalls by using a mathematical model that supports a large variety of network security controls (like NAT, NAPT, tunnels and filters up to the application layer) and allows an accurate analysis. The presented approach is efficient and highly scalable, as confirmed by tests with a large corporate network as well as synthetic networks.
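The equivalent-firewall idea can be sketched as follows: when traffic between two peers traverses several filtering devices in series, their combined behaviour is a single policy allowing exactly the packets every device allows. The packet model (a reduced 3-tuple) and first-match semantics here are simplifying assumptions, not the paper's full model, which also covers NAT, NAPT, tunnels, and application-layer filters.

```python
# Hypothetical sketch of an "equivalent firewall" for two filters in series.
# Packets are (src, dst, port) tuples; rules are (src, dst, port, action)
# with "*" as wildcard, first-match semantics, default deny.

from itertools import product

def allows(rules, pkt):
    """First-match filter semantics; default deny."""
    for (src, dst, port, action) in rules:
        if src in ("*", pkt[0]) and dst in ("*", pkt[1]) and port in ("*", pkt[2]):
            return action == "allow"
    return False

def equivalent_firewall(fw1, fw2, universe):
    """Allowed set of the serial composition of fw1 and fw2."""
    return {pkt for pkt in universe if allows(fw1, pkt) and allows(fw2, pkt)}

universe = set(product(["A", "B"], ["S"], ["80", "22"]))
fw1 = [("*", "S", "80", "allow")]                              # edge filter
fw2 = [("B", "S", "*", "deny"), ("*", "S", "*", "allow")]      # core filter
print(sorted(equivalent_firewall(fw1, fw2, universe)))         # [('A', 'S', '80')]
```

Enumerating the packet universe obviously does not scale; the paper's mathematical model summarizes the composition symbolically so that reachability queries can be answered without enumeration.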
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/11583/2671778

Can a Light Typing Discipline Be Compatible with an Efficient Implementation of Finite Fields Inversion?
http://hdl.handle.net/11583/2514482
Title: Can a Light Typing Discipline Be Compatible with an Efficient Implementation of Finite Fields Inversion?
Abstract: We show that an algorithm implementing the Binary-Field Arithmetic operation of multiplicative inversion exists as a purely functional term which is typeable in Dual Light Affine Logic (DLAL). As a consequence, the set ΛDLAL of functional terms typeable in DLAL is large enough to program the whole set of arithmetic operations. Second, and most important, we show
that ΛDLAL can be seen as a domain-specific language that forces the programmer to think about algorithms under a non-standard mental pattern, which may result in more essential, and possibly more efficient, descriptions of known algorithms.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/11583/2514482

Estimating Software Obfuscation Potency with Artificial Neural Networks
http://hdl.handle.net/11583/2680443
Title: Estimating Software Obfuscation Potency with Artificial Neural Networks
Abstract: This paper presents an approach to estimate the potency of obfuscation techniques. Our approach uses neural networks to accurately predict the value of complexity metrics – which are used to compute the potency – after an obfuscation transformation is applied to a code region. This work is the first step towards a decision support to optimally protect software applications.
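The core idea of predicting a post-transformation metric from pre-transformation features can be sketched with a single neuron. The data, the transformation, and the metric relationship below are entirely synthetic assumptions for illustration; the paper's networks and metric suite are more elaborate.

```python
# Illustrative sketch (synthetic data, not the paper's metrics or models):
# predict a complexity metric after obfuscation from the metric of the
# original region, using one linear neuron trained by stochastic gradient
# descent.

# (cyclomatic complexity before, cyclomatic complexity after) for regions
# already obfuscated with an assumed control-flow transformation
train = [(2, 7.0), (4, 13.0), (6, 19.0), (8, 25.0)]  # synthetic: 3*cc + 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(20000):
    for x, y in train:
        err = w * x + b - y      # prediction error on this sample
        w -= lr * err * x        # gradient step for the weight
        b -= lr * err            # gradient step for the bias

pred = w * 5 + b                 # estimate the metric for an unseen region
print(round(pred, 1))            # close to 16.0 on this synthetic data
```

The predicted metric value would then feed the potency computation, letting a decision-support tool compare candidate transformations without actually applying each one.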
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/11583/2680443

Light combinators for finite fields arithmetic
http://hdl.handle.net/11583/2623786
Title: Light combinators for finite fields arithmetic
Abstract: This work completes the definition of a library which provides the basic arithmetic
operations in binary finite fields as a set of functional terms with very specific features.
Such functional terms have types in Typeable Functional Assembly (TFA). TFA is an
extension of Dual Light Affine Logic (DLAL). DLAL is a type assignment designed under the
prescriptions of Implicit Computational Complexity (ICC), which characterises polynomial
time costing computations.
We plan to exploit the functional programming patterns of the terms in the library to
implement cryptographic primitives whose running-time efficiency can be obtained with
as little hand-made tuning as possible.
We propose the library as a benchmark. It fixes a kind of lower bound on the difficulty
of writing potentially interesting low cost programs inside languages that can express only
computations with predetermined complexity. In principle, every known and future ICC
compliant programming language for polynomially costing computations should supply a
simplification over the encoding of the library we present, or some set of combinators of
comparable interest and difficulty.
Finally, we report on a practical outcome of our library, a reward for programming
in the very restrictive scenario that TFA provides. The term of TFA which encodes
inversion in binary fields suggested to us a variant of a known and efficient
imperative implementation of the inversion itself, given by Fong. Our variant can
outperform Fong's implementation of inversion on specific hardware architectures.
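The binary-field operations the library provides can be sketched imperatively. The example below (a plain Python illustration, not one of the library's TFA terms) multiplies in GF(2^m) with a polynomial basis and inverts by exponentiation; the field GF(2^8) and the AES modulus are assumptions chosen for brevity, while the thesis's terms and Fong's inversion algorithm target other field sizes and a different inversion strategy.

```python
# Illustrative GF(2^m) arithmetic with a polynomial basis. Elements are
# Python ints whose bits are coefficients; inputs are assumed reduced.
# GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 is an assumption
# made for this example only.

M = 8
POLY = 0x11B  # irreducible modulus

def gf_mul(a, b):
    """Carry-less multiply of a and b, reduced modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> M:          # degree reached m: reduce
            a ^= POLY
        b >>= 1
    return r

def gf_inv(a):
    """Inverse via Fermat's little theorem: a^(2^m - 2) in GF(2^m)."""
    r, e, base = 1, (1 << M) - 2, a
    while e:
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r

a = 0x53
inv = gf_inv(a)
print(hex(inv), gf_mul(a, inv))  # 0xca 1
```

Exponentiation-based inversion is simple but costly; Euclidean-style algorithms such as Fong's, and the variant the abstract mentions, trade this regularity for speed on suitable architectures.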
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/11583/2623786

A unified ontology for the virtualization domain
http://hdl.handle.net/11583/2482979
Title: A unified ontology for the virtualization domain
Abstract: This paper presents an ontology of virtual appliances and networks, along with an ontology-based approach for the automatic assessment of a virtualized computer network configuration. The ontology is inspired by the Libvirt XML format, based on the formal logic structures provided by the OWL language and enriched with logic rules expressed in SWRL. It can be used as a general taxonomy of the virtualized resources. We demonstrate the validity of our solution by showing the results of several analyses performed on a test system using a standard OWL-DL reasoner.
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/11583/2482979

Automatic discovery of software attacks via backward reasoning
http://hdl.handle.net/11583/2615485
Title: Automatic discovery of software attacks via backward reasoning
Abstract: Security risk management and mitigation are two of the most important items on several companies' agendas. In this scenario, software attacks pose a major threat to the reliable execution of services, thus bringing negative effects on businesses. This paper presents a formal model that allows the identification of all the attacks against the assets embedded in a software application. Our approach can be used to identify the threats that loom over the assets and helps to determine the potential countermeasures, that is, the protections to deploy for mitigating the risks. The proposed model uses a Knowledge Base to represent the software assets, the steps that can be executed to mount an attack, and their relationships. Inference rules permit the automatic discovery of attack step combinations towards the compromised assets, which are discovered using a backward programming methodology. This approach is very usable, as the attack discovery is fully automatic once the Knowledge Base is populated with the information regarding the application to protect. In addition, it has proven highly efficient and exhaustive.
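The backward-reasoning mechanism can be sketched as backward chaining: starting from a target asset, find attack steps whose postcondition establishes the current goal, and recurse on their preconditions until only known facts remain. The rule names, facts, and flat rule format below are invented for illustration; the paper's Knowledge Base is richer.

```python
# Hypothetical sketch of backward reasoning over a knowledge base of attack
# steps. Each rule is (attack step, preconditions, postcondition); facts are
# conditions that hold without any attack. All names are illustrative.

RULES = [
    ("debug_process",   {"debugger_attached"},   "read_memory"),
    ("attach_debugger", {"binary_unprotected"},  "debugger_attached"),
    ("dump_key",        {"read_memory"},         "key_compromised"),
]
FACTS = {"binary_unprotected"}

def attacks_on(goal, seen=frozenset()):
    """Yield attack-step sequences whose execution establishes `goal`."""
    if goal in FACTS:
        yield []
        return
    for step, pre, post in RULES:
        if post == goal and goal not in seen:
            sub, ok = [], True
            for p in pre:                       # each precondition needs its
                chains = list(attacks_on(p, seen | {goal}))  # own derivation
                if not chains:
                    ok = False
                    break
                sub += chains[0]                # take one derivation per pre
            if ok:
                yield sub + [step]

print(next(attacks_on("key_compromised")))
# ['attach_debugger', 'debug_process', 'dump_key']
```

The `seen` set prevents cyclic derivations, mirroring how an exhaustive backward search must avoid revisiting a goal already on the current derivation path.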
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/11583/2615485

Improved reachability analysis for security management
http://hdl.handle.net/11583/2504319
Title: Improved reachability analysis for security management
Abstract: Network reachability analysis evaluates the actual connectivity of an IT infrastructure. It can be performed by active network probing or by examining a formal model of the target IT infrastructure. The latter approach is preferable, as it does not interfere with normal network behaviour and can be easily used during the development and change management phases. In this paper we propose a novel modelling approach, based on a geometric representation of device configurations (i.e. the policies), which permits the computation of reachability using the concept of equivalent firewall. An equivalent firewall is a fictitious device, ideally connected directly to the communication endpoints, that summarizes the network behaviour between them. Our model supports routing, filtering and address translation devices in a computationally effective way. In fact, the experimental results show that the computation of equivalent firewalls takes negligible time and that the reachability queries are afterwards answered in a few seconds.
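The geometric flavour of the representation can be sketched by treating each rule as a hyperrectangle over integer ranges and answering a query by axis-wise intersection. The axes and the toy address space are illustrative assumptions, not the paper's exact model.

```python
# Sketch of a geometric policy representation: a rule is a hyperrectangle
# over (src, dst, port) integer ranges; intersecting a rule with a query
# yields the packets both describe. Axes and values are illustrative.

def intersect(r1, r2):
    """Intersect two hyperrectangles given as dicts of (lo, hi) ranges.

    Returns the common hyperrectangle, or None if empty on some axis.
    """
    out = {}
    for axis in r1:
        lo = max(r1[axis][0], r2[axis][0])
        hi = min(r1[axis][1], r2[axis][1])
        if lo > hi:
            return None     # empty on this axis: no common packets
        out[axis] = (lo, hi)
    return out

# "allow web traffic to the DMZ" vs. a reachability query about host 12
rule  = {"src": (0, 255), "dst": (10, 20), "port": (80, 80)}
query = {"src": (5, 5),   "dst": (12, 12), "port": (0, 1023)}
print(intersect(rule, query))
# {'src': (5, 5), 'dst': (12, 12), 'port': (80, 80)}
```

Because intersection, and with more bookkeeping, union and difference, stay closed over sets of such hyperrectangles, whole device chains can be composed into one equivalent firewall before any query is asked.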
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/11583/2504319

Towards Optimally Hiding Protected Assets in Software Applications
http://hdl.handle.net/11583/2679343
Title: Towards Optimally Hiding Protected Assets in Software Applications
Abstract: Software applications contain valuable assets that, if compromised, can put the security of users at stake and cause huge monetary losses for software developers. Software protections are applied whenever the assets' security is at risk, as they delay successful attacks. Unfortunately, protections might have recognizable fingerprints that can expose the location of the assets, thus facilitating the attackers' job. This paper presents a novel approach that uses three main methods to hide the protected assets: protection fingerprint replication, enlargement, and shadowing. The best way to hide assets is determined with a Mixed Integer Linear Program, which is automatically built starting from the code structure, the protected assets, and a model that depicts the dependencies among protections and the fingerprints they generate. Additional constraints, such as overhead limits, are also supported to ensure the usability of the protected applications. Our implementation, which uses off-the-shelf solvers, showed promising performance and scalability on large applications.
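The shape of the optimization can be sketched as a tiny 0/1 selection problem: choose code regions where decoy fingerprints are replicated so that their total decoy value is maximized within an overhead budget. The exhaustive search, region names, scores, and overheads below are all invented simplifications; the paper builds a real MILP and hands it to an off-the-shelf solver.

```python
# Toy version of the asset-hiding optimization: exhaustive 0/1 search in
# place of a MILP solver. Regions, decoy scores, and overheads are invented.

from itertools import product

regions  = ["init", "license_check", "render", "net_io"]
score    = {"init": 3, "license_check": 0, "render": 2, "net_io": 4}  # decoy value
overhead = {"init": 5, "license_check": 0, "render": 7, "net_io": 6}  # % slowdown
BUDGET = 12

best, best_val = None, -1
for choice in product([0, 1], repeat=len(regions)):
    cost = sum(overhead[r] * c for r, c in zip(regions, choice))
    val  = sum(score[r] * c for r, c in zip(regions, choice))
    if cost <= BUDGET and val > best_val:   # feasible and strictly better
        best, best_val = choice, val

picked = [r for r, c in zip(regions, best) if c]
print(picked, best_val)  # ['init', 'net_io'] 7
```

A MILP formulation expresses the same feasible set with linear constraints (binary variables, one budget inequality, a linear objective), which is what lets commercial solvers scale to large applications where enumeration cannot.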
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/11583/2679343

Automatic generation of high speed elliptic curve cryptography code
http://hdl.handle.net/11583/2652694
Title: Automatic generation of high speed elliptic curve cryptography code
Abstract: Apparently, trust is a rare commodity when power, money or life itself are at stake. History is full of examples. Julius Caesar did not trust his generals, so that: ``If he had anything confidential to say, he wrote it in cipher, that is, by so changing the order of the letters of the alphabet, that not a word could be made out. If anyone wishes to decipher these, and get at their meaning, he must substitute the fourth letter of the alphabet, namely D, for A, and so with the others.''
And so the history of cryptography began taking its first steps. Nowadays, encryption is no longer an emperor's prerogative and has become a daily-life operation. Cryptography is pervasive, ubiquitous and, best of all, completely transparent to the unaware user. Each time we buy something on the Internet, we use it. Each time we search for something on Google, we use it. All without (almost) realizing that it silently protects our privacy and our secrets.
Encryption is a very interesting instrument in the "toolbox of security" because it has very few side effects, at least on the user side. A particularly important one is the intrinsic slowdown that its use imposes on communications. High speed cryptography is very important for the Internet, where busy servers proliferate. Being faster is a double advantage: more throughput and less server overhead. In this context, however, public key algorithms start with a big handicap: their performance is very poor compared to that of their symmetric counterparts. For this reason, their use is often reduced to the essential operations, most notably key exchanges and digital signatures. The high speed public key cryptography challenge is a very practical topic with serious repercussions in our technocentric world. Using weak algorithms with a reduced key length to increase the performance of a system can lead to catastrophic results.
In 1985, Miller and Koblitz independently proposed to use the group of rational points of an elliptic curve over a finite field to create an asymmetric algorithm. Elliptic Curve Cryptography (ECC) is based on a problem known as the ECDLP (Elliptic Curve Discrete Logarithm Problem) and offers several advantages with respect to more traditional encryption systems such as RSA and DSA. The main benefit is that it requires smaller keys to provide the same security level, since breaking the ECDLP is much harder. In addition, a good ECC implementation can be very efficient in both time and memory consumption, thus being a good candidate for performing high speed public key cryptography. Moreover, some elliptic curve based techniques, such as SIDH (Supersingular Isogeny Diffie-Hellman), have been designed to resist quantum computing attacks.
Traditional elliptic curve cryptography implementations are optimized by hand, taking into account the mathematical properties of the underlying algebraic structures, the target machine architecture and the compiler facilities. This process is time consuming, requires a high degree of expertise and is, ultimately, error prone. This dissertation's ultimate goal is to automate the whole optimization process of cryptographic code, with a special focus on ECC. The framework presented in this thesis is able to produce high speed cryptographic code by automatically choosing the best algorithms and applying a number of code-improving techniques inspired by compiler theory. Its central component is a flexible and powerful compiler able to translate an algorithm written in a high level language and produce highly optimized C code for a particular algebraic structure and hardware platform. The system is generic enough to accommodate a wide array of number theory related algorithms; however, this document focuses only on optimizing primitives based on elliptic curves defined over binary fields.
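The kind of primitive such a framework would generate and optimize can be illustrated by the textbook baseline: left-to-right double-and-add scalar multiplication on a short Weierstrass curve. For brevity this sketch works over a tiny prime field rather than the binary fields the thesis targets, and the curve parameters are illustrative only.

```python
# Minimal double-and-add scalar multiplication on y^2 = x^3 + Ax + B.
# A tiny prime field (p = 97, A = 2) stands in for the thesis's binary
# fields; all parameters are illustrative.

P_MOD, A = 97, 2
INF = None  # point at infinity

def ec_add(p, q):
    """Add two affine points (chord-and-tangent group law)."""
    if p is INF:
        return q
    if q is INF:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                # p == -q
    if p == q:                                    # tangent slope
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                         # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Left-to-right double-and-add: the baseline an optimizing code
    generator would start from before applying faster algorithms."""
    r = INF
    for bit in bin(k)[2:]:
        r = ec_add(r, r)          # double
        if bit == "1":
            r = ec_add(r, p)      # add
    return r

G = (3, 6)  # a point on y^2 = x^3 + 2x + 3 over GF(97)
print(scalar_mult(5, G))
```

A generator like the one described would replace this straight-line form with, for example, windowed or Montgomery-ladder variants, specialized field arithmetic, and platform-specific instruction selection, all chosen automatically.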
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/11583/2652694