compression

Sorting a file to optimize for compression efficiency

孤街浪徒 submitted on 2019-12-10 15:13:51
Question: We have some large data files that are being concatenated, compressed, and then sent to another server. The compression reduces the transmission time to the destination server, so the smaller we can get the file in a short period of time, the better. This is a highly time-sensitive process. The data files contain many rows of tab-delimited text, and the order of the rows does not matter. We noticed that when we sorted the file by the first field, the compressed file size was much smaller…
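Sorting groups similar rows next to each other, so repeated prefixes fall inside the compressor's match window and the ratio improves. A minimal sketch of the comparison in Python, assuming gzip, an in-memory sort, and a hypothetical file name:

```python
import gzip

# Hypothetical input: tab-delimited rows whose order does not matter.
with open("data.tsv", "rb") as f:
    rows = f.read().splitlines(keepends=True)

unsorted_size = len(gzip.compress(b"".join(rows)))

# Sorting by the first field clusters similar rows, so the deflate
# compressor finds longer back-references inside its 32 KB window.
rows.sort(key=lambda r: r.split(b"\t", 1)[0])
sorted_size = len(gzip.compress(b"".join(rows)))

print(f"unsorted: {unsorted_size} bytes, sorted: {sorted_size} bytes")
```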

How correlation helps in compression

我们两清 submitted on 2019-12-10 14:42:22
Question: Hi, I would like to know how correlation between pixels in an image helps with compression. Also, why would I want to reduce the correlation between pixels to get better compression (I found that claim in the literature, but I don't understand it)? My last question: if the correlation is R = 0.9, how does that information help with compression? Thanks. Answer 1: I'll give an example. Let's say that every pixel is strongly correlated with the pixel above it. Instead of compressing the pixel values directly…
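The point of the (truncated) answer is that a predictor removes the correlation, leaving small residuals that are much cheaper to encode. A hedged sketch with synthetic data standing in for an image, assuming numpy and zlib:

```python
import numpy as np
import zlib

rng = np.random.default_rng(0)

# Synthetic "image": each row is the previous row plus small noise,
# i.e. vertically correlated pixels (roughly what R ~ 0.9 means).
img = np.cumsum(rng.integers(-3, 4, size=(256, 256)), axis=0).astype(np.uint8)

direct = zlib.compress(img.tobytes())

# Decorrelate: predict each pixel from the one above it and keep only the
# residual (essentially PNG's "Up" filter). Residuals cluster near zero,
# so the entropy coder has far fewer distinct symbols to deal with.
residual = np.diff(img, axis=0, prepend=img[:1]).astype(np.uint8)
decorrelated = zlib.compress(residual.tobytes())

print(len(direct), len(decorrelated))  # the residuals usually compress much better
```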

Downscaling JPG during JPG decompression

元气小坏坏 submitted on 2019-12-10 14:23:11
Question: I have to downscale and decompress a set of JPG images of size 4608 x 3456. Currently, I can already decompress the images correctly to RGB format and convert them to a Bitmap. Now I need to implement the downscaling, and from what I've read so far, to downscale an image correctly one should use bilinear interpolation. Then I should replace the 2x2 blocks of pixels used for the interpolation with the pixel that results from it. I need the image at about 1/4 of its…
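For exact power-of-two factors like 1/4, libjpeg can do most of the downscaling during decompression itself (scaled DCT decoding), which is far cheaper than decoding the full frame and filtering afterwards. A hedged sketch using Pillow, whose draft() call maps onto that libjpeg feature; the file name is hypothetical:

```python
from PIL import Image

# Decode-time downscaling: draft() asks the JPEG decoder to produce the
# image at roughly 1/4 scale instead of the full 4608x3456 frame.
img = Image.open("photo.jpg")
img.draft("RGB", (img.width // 4, img.height // 4))
img = img.convert("RGB")  # forces the (reduced-scale) decode

# A final bilinear resize cleans up the result if draft() could only get
# close to the requested size.
img = img.resize((1152, 864), Image.BILINEAR)
img.save("photo_small.jpg")
```

The post-decode alternative described in the question (averaging each 2x2 block into one output pixel) gives the same 1/4 result, just with the full-resolution decode cost up front.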

Using JavaScript to inflate a blob?

♀尐吖头ヾ submitted on 2019-12-10 14:21:15
Question: I'm using a websocket to send a JPEG image as a blob. I've compressed (gzipped) the blob, but I can't find a way to decompress it using JavaScript. Does anyone know how? This is the code I'm using to read the blob and then convert it into a base64 string: var r = new FileReader(); r.onload = function(){ draw(btoa(r.result)); }; r.readAsBinaryString(e.data); The draw() function basically draws the image, using the base64 string, onto an HTML5 canvas. I've found this library to inflate…
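Whatever inflates on the client has to match the wrapper the server produced: a gzip member, a zlib stream, and a raw deflate stream carry the same compressed data but different headers and trailers, and a mismatch is the usual reason an inflate call fails here. A hedged server-side sketch in Python showing the three containers (the browser side, typically a library such as pako, is not shown; the file name is hypothetical):

```python
import gzip
import zlib

jpeg_bytes = open("frame.jpg", "rb").read()

# Three containers around the same DEFLATE algorithm:
gzip_blob = gzip.compress(jpeg_bytes)              # gzip member, starts with 1f 8b
zlib_blob = zlib.compress(jpeg_bytes)              # zlib stream, starts with 78 ..
raw = zlib.compressobj(wbits=-15)
raw_blob = raw.compress(jpeg_bytes) + raw.flush()  # raw deflate, no header at all

print(gzip_blob[:2].hex(), zlib_blob[:2].hex())

# The client-side inflate call must be told which container was sent.
# Note: JPEG data is already compressed, so gzipping it rarely shrinks it much.
```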

How to use multiple threads for zlib compression (same input source)

拈花ヽ惹草 submitted on 2019-12-10 14:09:18
Question: My goal is to compress data from the same source in parallel threads. I have defined jobs, kept in a list, and each job holds the data it has read (500 KB - 1 MB per job). My compressor threads will compress each block's data using zlib and store it in the outbuf of the corresponding job. Now I want to merge all of this and create one output file in standard zlib format. From the zlib RFC, and after browsing the source of pigz, I understand that a zlib header looks like this: +---+--…
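This is roughly what pigz does with independent blocks: each thread produces a raw deflate stream that ends byte-aligned without the final-block bit, the pieces are concatenated, and a single zlib header and Adler-32 trailer are wrapped around the whole thing. A minimal sketch in Python, assuming independent blocks (no shared dictionary between them):

```python
import struct
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_block(args):
    data, is_last = args
    # Raw deflate per block (negative wbits: no per-block zlib header/trailer).
    co = zlib.compressobj(6, zlib.DEFLATED, -15)
    out = co.compress(data)
    # Non-final blocks end byte-aligned via Z_SYNC_FLUSH (BFINAL not set);
    # only the last block terminates the deflate stream with Z_FINISH.
    out += co.flush(zlib.Z_FINISH if is_last else zlib.Z_SYNC_FLUSH)
    return out

def parallel_zlib(blocks, workers=4):
    jobs = [(b, i == len(blocks) - 1) for i, b in enumerate(blocks)]
    with ThreadPoolExecutor(workers) as ex:
        parts = list(ex.map(compress_block, jobs))
    # One zlib wrapper around the concatenated deflate data:
    # 2-byte header + deflate blocks + Adler-32 over all uncompressed input.
    adler = 1
    for b in blocks:
        adler = zlib.adler32(b, adler)
    return b"\x78\x9c" + b"".join(parts) + struct.pack(">I", adler & 0xFFFFFFFF)

if __name__ == "__main__":
    blocks = [bytes(700_000) for _ in range(4)]          # stand-ins for the jobs
    merged = parallel_zlib(blocks)
    assert zlib.decompress(merged) == b"".join(blocks)   # one standard zlib stream
```

Because zlib releases the GIL while compressing, plain threads are enough for real parallelism here; the only cost of independent blocks is a slightly worse ratio at the block boundaries.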

Ionic 3 compress image

…衆ロ難τιáo~ submitted on 2019-12-10 13:25:07
Question: I've been trying to compress an image client-side with Ionic 3 for two days now. I have tried: ng2-img-max - throws an error when using the blueimp-canvas-to-blob canvas.toBlob() method (a dependency of ng2-img-max); it only told me which line the error occurred on. I think I have read that creating an HTMLCanvasElement in Ionic isn't possible - something to do with web workers. Ahdin - a JS library. JIC - a JS library. TinyJPG - an npm module. These all threw various errors, after…

Compression with best ratio in Python?

冷暖自知 submitted on 2019-12-10 13:05:21
Question: Which compression method in Python has the best compression ratio? Is the commonly used zlib.compress() the best, or are there better options? I need the best compression ratio possible. I am compressing strings and sending them over UDP. A typical string I compress is about 1,700,000 bytes. Answer 1: I'm sure there might be some more obscure formats with better compression, but lzma is the best of those that are well supported. There are some Python bindings here. EDIT: Don't pick…
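Since Python 3.3 the standard library already ships zlib, bz2, and lzma, so the ratios can be compared directly without external bindings. A small sketch with a hypothetical payload standing in for the ~1.7 MB strings in the question:

```python
import bz2
import lzma
import zlib

# Hypothetical payload; repetitive text of roughly the size in the question.
payload = b"timestamp\tsensor\tvalue\n" * 70_000

candidates = {
    "zlib -9": zlib.compress(payload, 9),
    "bz2 -9 ": bz2.compress(payload, 9),
    "lzma/xz": lzma.compress(payload, preset=9),
}
for name, blob in candidates.items():
    print(f"{name} {len(blob):>10d} bytes "
          f"({len(blob) / len(payload):.2%} of original)")
```

On most text-like data lzma wins on ratio at the cost of speed and memory, which matters if the UDP sender is also time-constrained.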

Downloading large file in python error: Compressed file ended before the end-of-stream marker was reached

假装没事ソ submitted on 2019-12-10 12:54:05
Question: I am downloading a compressed file from the internet: with lzma.open(urllib.request.urlopen(url)) as file: for line in file: ... After downloading and processing a large part of the file, I eventually get the error: File "/usr/lib/python3.4/lzma.py", line 225, in _fill_buffer raise EOFError("Compressed file ended before the " EOFError: Compressed file ended before the end-of-stream marker was reached I am thinking it might be caused by an internet connection that drops, or the…
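The EOFError means the decompressor ran out of input before reaching the xz end-of-stream marker, which most often points at a transfer that was cut off mid-download. Downloading to disk first, checking the length against Content-Length, and retrying separates network problems from decompression problems. A hedged sketch (the URL and retry policy are illustrative, not from the question):

```python
import lzma
import time
import urllib.request

def download(url, path, attempts=3):
    """Fetch url to path, retrying if the body is shorter than Content-Length."""
    for attempt in range(attempts):
        with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
            expected = resp.headers.get("Content-Length")
            written = 0
            while chunk := resp.read(1 << 16):
                out.write(chunk)
                written += len(chunk)
        if expected is None or written == int(expected):
            return path
        time.sleep(2 ** attempt)   # truncated body: back off and retry
    raise IOError(f"could not download {url} completely")

# Decompress only once the whole file is known to be on disk.
with lzma.open(download("https://example.com/data.xz", "data.xz")) as f:
    for line in f:
        ...
```

If the error still occurs on the on-disk copy, the file on the server is genuinely truncated and retrying will not help.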

SQL Server 2008 Backup Compression Standard Edition

妖精的绣舞 submitted on 2019-12-10 12:30:53
Question: I'm trying to back up a database in SQL Server 2008 and have the backup compressed using the new compression feature. However, when I run the following statement: Backup Database <Database> To Disk 'C:\Backup' With Compression I get this error message: Backup Database With Compression is not supported on Standard Edition Does this mean I have to upgrade to the full version, or is there a way to enable compression in Standard Edition? Answer 1: Backup compression is not…

Worst PNG compression scenario

。_饼干妹妹 submitted on 2019-12-10 12:19:00
Question: I am using libpng to convert raw image data (3 channels, 8 bits, no metadata) to PNG and store it in a buffer. I now have the problem of allocating the right amount of buffer space to write the PNG data into. It is clear to me that the compressed data might be larger than the raw data (cf. the overhead for a 1x1 image). Is there a general rule for an upper bound on the compressed data size with respect to the image size and the different filtering/compression options? If that is too generic…
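A conservative bound can be built from deflate's stored-block worst case (the compressor never expands the data by more than a few bytes per 16 KB plus the zlib wrapper) plus PNG's own overhead: one filter byte per row, the signature, IHDR/IEND, and per-IDAT-chunk framing. The estimator below is a hedged sketch under those assumptions, not libpng's exact accounting; the 8 KB IDAT chunk size is an illustrative default:

```python
def png_buffer_upper_bound(width, height, channels=3, bytes_per_sample=1,
                           idat_chunk_size=8192):
    """Conservative upper bound for the PNG output size of a raw RGB image.

    Assumptions: worst-case deflate falls back to stored blocks (~5 bytes of
    overhead per 16 KB plus a 2-byte zlib header and 4-byte Adler-32), PNG
    adds one filter byte per row, and the IDAT payload is split into chunks
    of idat_chunk_size bytes, each costing 12 bytes of chunk framing.
    """
    filtered = height * (1 + width * channels * bytes_per_sample)

    # deflate stored-block worst case + zlib header + Adler-32 trailer
    deflate_bound = filtered + 5 * (filtered // 16383 + 1) + 6

    # 8-byte signature, IHDR (25 bytes), IEND (12 bytes), IDAT framing
    idat_chunks = deflate_bound // idat_chunk_size + 1
    return 8 + 25 + 12 + deflate_bound + 12 * idat_chunks

print(png_buffer_upper_bound(1, 1))        # tiny images are dominated by overhead
print(png_buffer_upper_bound(1920, 1080))  # large images: raw size + ~0.1%
```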