compression

Python decompress gzip data in memory without file

跟風遠走 submitted on 2019-12-01 23:24:42
I have gzipped data from an HTTP reply, and the following code:

    def gzipDecode(self, content):
        import StringIO
        import gzip
        outFilePath = 'test'
        compressedFile = StringIO.StringIO(content)
        decompressedFile = gzip.GzipFile(fileobj=compressedFile)
        with open(outFilePath, 'w') as outfile:
            outfile.write(decompressedFile.read())
        data = ''
        with open(outFilePath, 'r') as myfile:
            data = myfile.read().replace('\n', '')
        return data

It decompresses the gzipped input and returns a string (the HTTP reply is gzipped JSON), and it works. But I need to do this without creating the test file, all in memory. I modified it to: def
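A fully in-memory version never needs the intermediate file: decompress straight from a BytesIO wrapper, or use gzip.decompress (available since Python 3.2). A minimal sketch, assuming Python 3 and a UTF-8 JSON payload (the question's code is Python 2, where StringIO plays the role of io.BytesIO):

    import gzip
    import io

    def gzip_decode(content):
        """Decompress gzipped bytes entirely in memory and return the text."""
        with gzip.GzipFile(fileobj=io.BytesIO(content)) as f:
            return f.read().decode('utf-8')

    # Equivalent one-liner on Python 3.2+:
    # gzip.decompress(content).decode('utf-8')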

Why do image compression algorithms process the image by sub-blocks?

谁说我不能喝 submitted on 2019-12-01 23:05:56
For instance, consider the DFT or DCT. Precisely, what are the differences between an image transformed in sub-blocks and an image transformed as a whole? Is the resulting file size smaller? Is the algorithm more efficient? Does the transformed image look different? Thanks.

ctrl-alt-delor: They are designed so they can be implemented using parallel hardware. Each block is independent and can be calculated on a different computing node, or shared out to as many nodes as you have. Also, as noted in an answer to "Why JPEG compression processes image by 8x8 blocks?", the computational complexity is
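To make the independence concrete, here is a sketch of a block-wise 2-D DCT in Python (assuming SciPy is available and the image dimensions are exact multiples of the block size). Each tile's coefficients depend only on that tile's pixels, which is what enables parallel processing and keeps quantization error local (the familiar JPEG blocking artifacts):

    import numpy as np
    from scipy.fft import dctn

    def blockwise_dct(image, block=8):
        """2-D DCT applied independently to each block x block tile."""
        h, w = image.shape
        out = np.empty((h, w), dtype=np.float64)
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = image[y:y + block, x:x + block]
                out[y:y + block, x:x + block] = dctn(tile, norm='ortho')
        return out

    # A whole-image transform, by contrast, mixes every pixel into
    # every coefficient: dctn(image, norm='ortho')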

Create zip file in memory from bytes (text with arbitrary encoding)

笑着哭i submitted on 2019-12-01 22:51:35
The application I'm developing needs to compress XML files into zip files and send them through HTTP requests to a web service. As I don't need to keep the zip files, I'm just performing the compression in memory. The web service is denying my requests because the zip files are apparently malformed. I know there is a solution in this question which works perfectly, but it uses a StreamWriter. My problem with that solution is that StreamWriter requires an encoding or assumes UTF-8, and I do not need to know the encoding of the XML files. I just need to read the bytes from those files, and
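The question is about C#, but the underlying idea is language-agnostic: copy the raw bytes into the archive without ever decoding them, and make sure the archive is finalized (closed) before its bytes are read, since an unfinalized in-memory zip is a classic cause of "malformed" rejections. A minimal sketch of the pattern in Python:

    import io
    import zipfile

    def zip_bytes(entries):
        """Build a zip archive in memory from (name, raw_bytes) pairs.

        The payloads are treated as opaque bytes, so the XML encoding
        never has to be known or guessed.
        """
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
            for name, raw in entries:
                zf.writestr(name, raw)  # bytes go in unmodified
        # Only after the ZipFile is closed is the central directory
        # written and the archive well-formed.
        return buf.getvalue()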

Prevent Apache from chunking gzipped content

匆匆过客 submitted on 2019-12-01 22:41:26
When using mod_deflate in Apache 2, Apache will chunk gzipped content, setting the Transfer-Encoding: chunked header. While this results in a faster download time, I cannot display a progress bar. If I handle the compression myself in PHP, I can gzip the content completely first and set the Content-Length header, so that I can display a progress bar to the user. Is there any setting that would change Apache's default behavior and have Apache set a Content-Length header instead of chunking the response, so that I don't have to handle the compression myself? You could maybe play with the sendBufferSize
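The trade-off the question describes is general: a server can only send Content-Length for a compressed response if it buffers and compresses the whole body first, which is exactly what streaming mod_deflate avoids. Not an Apache directive, but a minimal Python sketch of the buffer-first pattern the asker implements in PHP:

    import gzip
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Compress the full payload up front...
            body = gzip.compress(b'{"key": "value"}' * 1000)
            self.send_response(200)
            self.send_header('Content-Encoding', 'gzip')
            # ...so the final size is known and Content-Length can be
            # sent; the client can render a progress bar and no
            # Transfer-Encoding: chunked is needed.
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # HTTPServer(('', 8000), Handler).serve_forever()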

Delphi XE and ZLib Problems

﹥>﹥吖頭↗ submitted on 2019-12-01 19:50:47
Question: I'm in Delphi XE and I'm having some problems with ZLib routines. I'm trying to compress some strings (and encode them to send via a SOAP web service; not really important). The string returned by ZDecompressString differs from the one passed to ZCompressString. Example 1:

    uses ZLib;
    // compressing string
    // ZCompressString('1234567890', zcMax);
    // compressed string = 'xÚ3426153·°4'
    // Uncompressing the result of ZCompressString doesn't return the same:
    // ZDecompressString('xÚ3426153·°4');
    // uncompressed
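A common root cause in this situation (stated here as an assumption, since the question is cut off) is that compressed output is binary, and routing it through a Unicode string type or an XML transport silently alters bytes; the usual remedy is to Base64-encode the compressed bytes before sending. A sketch of that round trip in Python, since the pitfall is language-independent:

    import base64
    import zlib

    def compress_for_soap(text):
        """Compress, then Base64-encode so the result survives any
        string/XML transport without byte corruption."""
        return base64.b64encode(zlib.compress(text.encode('utf-8'), 9)).decode('ascii')

    def decompress_from_soap(payload):
        return zlib.decompress(base64.b64decode(payload)).decode('utf-8')

    assert decompress_from_soap(compress_for_soap('1234567890')) == '1234567890'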

How to extract a rar file in C#?

故事扮演 submitted on 2019-12-01 19:43:37
I want to extract .rar files using the cmd shell, so I wrote this code:

    string commandLine = @"c:\progra~1\winrar\winrar e c:\download\TestedU.rar c:\download";
    ProcessStartInfo PSI = new ProcessStartInfo("cmd.exe");
    PSI.RedirectStandardInput = true;
    PSI.RedirectStandardOutput = true;
    PSI.RedirectStandardError = true;
    PSI.UseShellExecute = false;
    Process p = Process.Start(PSI);
    StreamWriter SW = p.StandardInput;
    StreamReader SR = p.StandardOutput;
    SW.WriteLine(commandLine);
    SW.Close();

The first time it worked fine; the second time it displayed nothing. Use SevenZipSharp, as it's a bit better way of
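Whatever the language, launching the archiver directly with its arguments is more robust than typing a command into an interactive cmd.exe session, because you then get a real exit code and captured output on every run. A sketch of that approach in Python (the WinRAR path and flags mirror the question and are otherwise assumptions):

    import subprocess

    result = subprocess.run(
        [r'C:\Program Files\WinRAR\WinRAR.exe', 'e',
         r'C:\download\TestedU.rar', r'C:\download'],
        capture_output=True, text=True,
    )
    # A real exit code and output, reliable on every invocation.
    print(result.returncode, result.stdout, result.stderr)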

How do I list contents of a gz file without extracting it in python?

夙愿已清 submitted on 2019-12-01 18:51:33
I have a .gz file and I need to get the names of the files inside it using Python. This question is the same as this one; the only difference is that my file is .gz, not .tar.gz, so the tarfile library did not help me here. I am using the requests library to request a URL. The response is a compressed file. Here is the code I am using to download the file:

    response = requests.get(line.rstrip(), stream=True)
    if response.status_code == 200:
        with open(str(base_output_dir)+"/"+str(current_dir)+"/"+str(count)+".gz", 'wb') as out_file:
            shutil.copyfileobj(response.raw, out_file)
    del response

This code downloads
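One point worth knowing up front: a plain .gz wraps exactly one compressed stream, so there is no file listing to extract; at most, the optional FNAME field of the gzip header (RFC 1952) records the original filename. A sketch that reads just that field, without decompressing anything:

    import struct

    def gzip_stored_name(path):
        """Return the original filename stored in a .gz header, or None."""
        with open(path, 'rb') as f:
            header = f.read(10)                   # fixed 10-byte gzip header
            if header[:2] != b'\x1f\x8b':
                raise ValueError('not a gzip file')
            flags = header[3]
            if flags & 0x04:                      # FEXTRA: skip the extra field
                xlen = struct.unpack('<H', f.read(2))[0]
                f.read(xlen)
            if not flags & 0x08:                  # no FNAME recorded
                return None
            name = bytearray()                    # zero-terminated Latin-1 name
            while True:
                b = f.read(1)
                if b in (b'', b'\x00'):
                    break
                name += b
            return name.decode('latin-1')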

File compression options with ggplot2

半世苍凉 submitted on 2019-12-01 18:40:06
Question: Is it possible to compress the file size of a figure using ggsave? I have tried using the compression = "lzw" argument, but the file size remains the same. (Using RStudio 0.98.501 on OS X Yosemite.) My code:

    ggsave("Figure1.tiff", width = 14, height = 8, dpi=600, compression = "lzw")

Is it possible to add a compression argument with ggsave?

Answer 1: UPDATE: I just tried this option with ggplot2 2.2.1. If you save using the file ending ".tiff" and specify compression = "lzw" (exactly as written

Why can't I use “CompactDatabase” in DAO.DBEngine.36 using VBscript?

给你一囗甜甜゛ submitted on 2019-12-01 18:17:53
Question: I'm trying to make a small VBScript that compacts an MS Access 2007 database file. The code I have is:

    Set acc2007 = CreateObject("DAO.DBEngine.36")
    acc2007.CompactDatabase "C:\test.accdb", "C:\test2.accdb", Nothing, Nothing, ";pwd=test"
    Set acc2007 = Nothing

I'm getting this error when I run the three lines with "cscript test.vbs" from a 32-bit cmd.exe:

    C:\test.vbs(10, 1) DAO.DbEngine: Unrecognized database format 'C:\test.accdb'.

The database was created with MS Access 2007; when I open it

“Linear dependence in the dictionary” exception in sklearns OMP

风格不统一 submitted on 2019-12-01 18:16:15
I'm using sklearn's OrthogonalMatchingPursuit to get a sparse coding of a signal, using a dictionary learned by a K-SVD algorithm. However, during the fit I get the following RuntimeWarning:

    /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/omp.py:391: RuntimeWarning:
    Orthogonal matching pursuit ended prematurely due to linear dependence in the dictionary.
    The requested precision might not have been met.
      copy_X=copy_X, return_path=return_path)

In those cases the results are indeed not satisfactory. I don't get the point of this warning, as it is common in sparse coding to have an
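The warning is easy to reproduce on purpose: give OMP a dictionary containing linearly dependent atoms, and the solver stops early when it detects the dependence. A self-contained sketch (shapes and sparsity level are assumptions, not the asker's actual K-SVD setup):

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)

    # A dictionary with deliberately dependent atoms: the last column
    # duplicates the first.
    D = rng.standard_normal((64, 32))
    D[:, -1] = D[:, 0]
    D /= np.linalg.norm(D, axis=0)        # unit-norm atoms, as in K-SVD

    # A signal that is sparse in that dictionary.
    coef = rng.standard_normal(32) * (rng.random(32) < 0.2)
    y = D @ coef

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10)
    omp.fit(D, y)                         # may emit the RuntimeWarning above
    print(np.count_nonzero(omp.coef_))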