compression

C/C++ Packing and Compression [closed]

本秂侑毒 posted on 2020-01-12 21:04:10
Question: I'm working on a commercial project that requires a couple of files to be bundled (packed) into an archive and then compressed. Right now we have zlib in our utility library, but it doesn't look like zlib has the functionality to compress multiple files into one archive. Does …
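zlib on its own only produces raw DEFLATE/gzip streams; bundling multiple files needs an archive container on top of it. As a sketch of that pack-then-compress split, Python's zipfile (which drives zlib internally) does both in one step; a C/C++ project would typically reach for minizip (shipped in zlib's contrib directory) or libarchive for the same result:

```python
import zipfile

def pack_files(archive_path, file_paths):
    """Bundle several files into one DEFLATE-compressed archive."""
    with zipfile.ZipFile(archive_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for path in file_paths:
            zf.write(path)  # each file becomes a separately-compressed entry
```

Each entry is compressed independently, so individual files can later be extracted without decompressing the whole archive.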

Importing Python modules from a select location

[亡魂溺海] posted on 2020-01-12 15:54:06
Question: Let's say I had three scripts: Main.py (has all imports), 1.py (random script), 2.py (random script). pyinstaller -F --onedir Main.py (80 MB), pyinstaller -F --onedir 1.py (80 MB), pyinstaller -F --onedir 2.py (80 MB). This creates three folders. I then copy 1.exe and 2.exe into the Main folder with all the dependencies, and this runs fine. Two issues are present. First, the size: one-file mode reduces this to 30 MB, while one-folder mode keeps it at 80 MB. More importantly, the exes are unable to leave that folder. I've had …
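One way around shipping three separate PyInstaller bundles is to freeze only Main.py and have it import the other scripts as modules from a directory chosen at runtime. A minimal sketch, assuming a scripts_dir location (the name is hypothetical, adjust to taste):

```python
import importlib
import sys

def import_from(scripts_dir, module_name):
    """Import module_name from scripts_dir, adding it to sys.path if needed."""
    if scripts_dir not in sys.path:
        sys.path.insert(0, scripts_dir)
    return importlib.import_module(module_name)
```

With this approach only Main.py carries the 80 MB of frozen dependencies; the helper scripts stay as plain .py files that can live anywhere.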

Using unrar library - extracting files into a filestream buffer

自闭症网瘾萝莉.ら posted on 2020-01-11 20:04:43
Question: What I need is to be able to extract the files in a .rar file into streams. I'm creating a test case to get a sense of how to use the unrar source. I've been searching and tinkering for a while, but I can't figure out how to use the library. I'm surprised I can't even find documentation or a tutorial for it, considering how common .rar archives are. I've made a bit of progress on my own, but it doesn't always work: certain files are extracted properly, while other files are jumbled up for some reason …

Compress files while reading data from STDIN

拜拜、爱过 posted on 2020-01-11 15:04:09
Question: Is it possible to compress (create a compressed archive from) data while reading from stdin on Linux? Answer 1: Yes, use gzip for this. The best way is to read the data as input and redirect the compressed output to a file, i.e. cat test.csv | gzip > test.csv.gz. cat test.csv sends the data to stdout, and through the pipe gzip reads it as stdin. Make sure to redirect gzip's output to a file, as the compressed data will not be written to the terminal. Answer 2: Yes, gzip will let you do this. If you …
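The same pipeline can be reproduced with Python's gzip module, which compresses any readable byte stream chunk by chunk without buffering the whole input:

```python
import gzip
import shutil

def gzip_stream(src, dst):
    """Compress bytes read from src into dst as a gzip stream."""
    with gzip.open(dst, "wb") as gz:
        shutil.copyfileobj(src, gz)  # copies in chunks, so memory use stays flat

# Calling gzip_stream(sys.stdin.buffer, sys.stdout.buffer) mirrors
# the shell pipeline `gzip < test.csv > test.csv.gz`.
```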

Create a tar.xz in one command

ぐ巨炮叔叔 posted on 2020-01-11 14:47:48
Question: I am trying to create a .tar.xz compressed archive in one command. What is the specific syntax for that? I have tried tar cf - file | xz file.tar.xz, but that does not work. Answer 1: Use the -J compression option for xz. And remember to man tar :) tar cfJ <archive.tar.xz> <files> Edit 2015-08-10: If you're passing the arguments to tar with dashes (e.g. tar -cf as opposed to tar cf), then the -f option must come last, since it specifies the filename (thanks to @A-B-B for pointing that out!). In …
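For reference, the one-step equivalent in Python's tarfile uses the "w:xz" mode, which streams the archive through LZMA much as tar's -J flag does:

```python
import tarfile

def make_tar_xz(archive_path, file_paths):
    """Create archive_path as a .tar.xz containing the given files."""
    with tarfile.open(archive_path, "w:xz") as tar:
        for path in file_paths:
            tar.add(path)  # directories passed here are added recursively
```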

Any ideas why patterned SVG file is showing up blank in the browser?

怎甘沉沦 posted on 2020-01-11 14:19:31
Question: I have an SVG file that I created in Illustrator; it consists of a pattern made using the swatch tool. When I try to load it locally, it shows up blank in the browser. Here is the file if you want to test it: http://d.pr/ZvhV Answer 1: If you find that it works in Firefox, IE, or Edge but not Chrome, this was my issue too. I fixed it by opening the .SVG file in a text editor, and everywhere I saw this tag: xlink:href="data:img/png;base64 I replaced it with xlink:href="data:image/png …
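The fix amounts to a plain string substitution, since "img/png" is not a valid MIME type while "image/png" is. A one-liner sketch of the same edit:

```python
def fix_svg_mime(svg_text):
    """Replace Illustrator's invalid data-URI MIME type with the valid one."""
    return svg_text.replace("data:img/png;base64", "data:image/png;base64")
```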

Prevent Apache from chunking gzipped content

烂漫一生 posted on 2020-01-11 10:12:19
Question: When using mod_deflate in Apache2, Apache will chunk gzipped content, setting the Transfer-Encoding: chunked header. While this results in a faster download time, I cannot display a progress bar. If I handle the compression myself in PHP, I can gzip it completely first and set the Content-Length header, so that I can display a progress bar to the user. Is there any setting that would change Apache's default behavior and have Apache set a Content-Length header instead of chunking the response …
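The workaround the question already describes (compress everything first, then announce the size) looks like this when sketched in Python rather than PHP; it trades streaming for a known Content-Length:

```python
import gzip

def gzip_response(body):
    """Compress the full body up front so Content-Length can replace chunking."""
    compressed = gzip.compress(body)
    headers = {
        "Content-Encoding": "gzip",
        # Length is known only because the whole body was compressed first.
        "Content-Length": str(len(compressed)),
    }
    return headers, compressed
```

The trade-off: the server must hold the entire compressed body in memory before the first byte goes out, which is exactly what chunked transfer encoding avoids.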

“Linear dependence in the dictionary” exception in sklearn's OMP

試著忘記壹切 posted on 2020-01-11 09:11:08
Question: I'm using sklearn's OrthogonalMatchingPursuit to get a sparse coding of a signal using a dictionary learned by a KSVD algorithm. However, during the fit I get the following RuntimeWarning: /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/omp.py:391: RuntimeWarning: Orthogonal matching pursuit ended prematurely due to linear dependence in the dictionary. The requested precision might not have been met. copy_X=copy_X, return_path=return_path) In those cases the results are indeed not …
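The warning fires when atoms of the learned dictionary are (numerically) linearly dependent, so OMP cannot keep improving the residual. A quick, hypothetical pre-check with NumPy, assuming the dictionary D holds one atom per column:

```python
import numpy as np

def has_dependent_atoms(D):
    """True if the dictionary's columns (atoms) are linearly dependent."""
    # Rank below the atom count means some atom is a combination of others.
    return np.linalg.matrix_rank(D) < D.shape[1]
```

Note that any dictionary with more atoms than signal dimensions (the usual overcomplete case in KSVD) is necessarily dependent, so in practice the warning often just signals that OMP stopped once the residual fell into the span of the selected atoms.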