Home  /  Algorithms  /  Vol. 12 No. 6 (2019)  /  Article

Time-Universal Data Compression

Boris Ryabko    

Abstract

Nowadays, a variety of data compressors (or archivers) is available, each with its own merits, and no single one can be called the best. Thus, one faces the problem of choosing the best method to compress a given file, and the larger the file, the more important this choice becomes. It seems natural to try all the compressors, choose the one that gives the shortest compressed file, and then transfer (or store) the index of the best compressor (this requires ⌈log m⌉ bits, if m is the number of compressors available) together with the compressed file. The only problem is the time, which increases substantially due to the need to compress the file m times (in order to find the best compressor). We suggest a method of data compression whose performance is close to optimal, but for which the extra time needed is relatively small: the ratio of this extra time to the total computation time can be bounded, asymptotically, by an arbitrary positive constant. In short, the main idea of the suggested approach is as follows: in order to find the best compressor, try all of them, but use only a small part of the file when doing so; then apply the best compressor to the whole file. Note that there are many situations where it may be necessary to find the best data compressor out of a given set. In such cases, this is often done by comparing compressors empirically. One of the goals of this work is to turn such a selection process into a part of the data compression method itself, automating and optimizing it.
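The selection idea described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact construction: it uses Python's standard-library compressors (zlib, bz2, lzma) as the set of m candidates, tries each on a small prefix of the input, and then applies the winner to the whole file. The prefix fraction is an arbitrary illustrative parameter; the paper's analysis concerns how to choose it so the extra time stays asymptotically bounded.

```python
import bz2
import lzma
import zlib

# Candidate compressors, indexed so that the winner's index can be
# transmitted alongside the compressed file (ceil(log2 m) bits for m candidates).
compressors = {0: zlib.compress, 1: bz2.compress, 2: lzma.compress}
decompressors = {0: zlib.decompress, 1: bz2.decompress, 2: lzma.decompress}

def time_universal_compress(data: bytes, prefix_fraction: float = 0.05):
    """Pick the compressor that performs best on a small prefix,
    then compress the whole input with it."""
    prefix = data[: max(1, int(len(data) * prefix_fraction))]
    # Try every compressor, but only on the prefix; this keeps the
    # extra time small relative to compressing the full file m times.
    best_index = min(compressors, key=lambda i: len(compressors[i](prefix)))
    # Apply only the winning compressor to the whole file.
    return best_index, compressors[best_index](data)

index, packed = time_universal_compress(b"abracadabra " * 1000)
restored = decompressors[index](packed)
```

The receiver uses the transmitted index to select the matching decompressor, so the scheme is lossless as long as every candidate is.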

 Similar articles

Daniel Althoff, Lineu Neiva Rodrigues and Demetrius David da Silva    
Small reservoirs play a key role in the Brazilian savannah (Cerrado), making irrigation feasible and contributing to the economic development and social well-being of the population. A lack of information on factors, such as evaporative water loss, has a... see more
Journal: Water

 
Masoud Jafari Shalamzari, Wanchang Zhang, Atefeh Gholami and Zhijie Zhang    
Site selection for runoff harvesting at large scales is a very complex task. It requires inclusion and spatial analysis of a multitude of accurately measured parameters in a time-efficient manner. Compared with direct measurements of runoff, which is tim... see more
Journal: Water

 
Juan Murillo-Morera, Carlos Castro-Herrera, Javier Arroyo, Ruben Fuentes-Fernandez     pp. 114-137
Today, it is common for software projects to collect measurement data through development processes. With these data, defect prediction software can try to estimate the defect proneness of a software module, with the objective of assisting and guiding so... see more

 
Damny Magdaleno Guevara, Yadriel Miranda, Ivett Fuentes, María García     pp. 69-80
A huge amount of information is represented in XML format. Several tools have been developed to store and query XML data. It becomes inevitable to develop high-performance techniques for efficiently analysing extremely large collections of XML data. O... see more

 
Muhammad Tayyab, Rana Ammar Aslam, Umar Farooq, Sikandar Ali, Shahbaz Nasir Khan, Mazhar Iqbal, Muhammad Imran Khan and Naeem Saddique    
Groundwater Arsenic (As) data are often sparse and location-specific, making them insufficient to represent the heterogeneity in groundwater quality status at unsampled locations. Interpolation techniques have been used to map groundwater As data at unsa... see more
Journal: Water