
An Adaptive Data Compression Scheme for Memory Traffic Minimization in Processor-Based Systems / Benini, L.; Bruni, D.; Ricco, B.; Macii, Alberto; Macii, Enrico. - Vol. 4 (2002), pp. 866-869. (Paper presented at ISCAS-02: IEEE International Symposium on Circuits and Systems, held in Phoenix, AZ) [10.1109/ISCAS.2002.1010595].

An Adaptive Data Compression Scheme for Memory Traffic Minimization in Processor-Based Systems

MACII, Alberto;MACII, Enrico
2002

Abstract

This paper proposes a data compression scheme for minimizing memory traffic in processor-based systems. Data compression and decompression are performed on-the-fly on the cache-to-memory path: uncompressed cache lines are compressed before they are written back to main memory, and decompressed when cache refills take place. The distinguishing feature of the presented solution is its ability to provide high memory traffic reductions without requiring data profiling information. In other words, thanks to the self-learning mechanism it implements, the proposed scheme performs very close to special-purpose compression approaches, whose main limitation is their inapplicability when off-line data profiling is not feasible. Memory traffic reductions in the cache-to-memory path of a core-based system running standard benchmark programs are, on average, around 34%, and are thus close to those achievable with profile-driven compression.
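The self-learning, profile-free idea described in the abstract can be illustrated with a toy sketch. This is a hypothetical illustration, not the paper's actual algorithm: an on-line dictionary compressor for cache lines in which frequently seen 4-byte words are learned with a move-to-front policy, a dictionary hit is encoded as a 1-byte index, and a miss as an escape byte followed by the raw word. Compressor and decompressor each start with an empty dictionary and apply identical updates, so no off-line profiling is needed.

```python
# Hypothetical sketch of adaptive, profile-free cache-line compression.
# Not the algorithm from the paper; names and parameters are illustrative.

ESCAPE = 0xFF      # marker byte for an uncompressed (raw) word
DICT_SIZE = 16     # entries in the learned dictionary (indices 0..15)


class AdaptiveCompressor:
    """Learns frequent 4-byte words on-line (move-to-front)."""

    def __init__(self):
        self.dictionary = []  # most recently/frequently seen words first

    def _learn(self, word):
        # Self-learning step: promote the word to the front of the
        # dictionary, evicting the oldest entry when full.
        if word in self.dictionary:
            self.dictionary.remove(word)
        self.dictionary.insert(0, word)
        del self.dictionary[DICT_SIZE:]

    def compress(self, line):
        # line: bytes of one cache line; length must be a multiple of 4.
        out = bytearray()
        for i in range(0, len(line), 4):
            word = bytes(line[i:i + 4])
            if word in self.dictionary:
                out.append(self.dictionary.index(word))  # 1-byte code
            else:
                out.append(ESCAPE)
                out += word                              # escape + raw word
            self._learn(word)
        return bytes(out)

    def decompress(self, data):
        # Mirrors compress(): identical dictionary updates keep both
        # ends in sync without any exchanged profiling information.
        out = bytearray()
        i = 0
        while i < len(data):
            if data[i] == ESCAPE:
                word = bytes(data[i + 1:i + 5])
                i += 5
            else:
                word = self.dictionary[data[i]]
                i += 1
            out += word
            self._learn(word)
        return bytes(out)
```

Because the move-to-front update is deterministic in the word sequence, the decompressor's dictionary state always matches the compressor's at each code, which is the essence of an adaptive scheme that needs no shared profile: a line full of repeated words shrinks after the first occurrence is learned.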
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/1497387