compression

Excluding directory when creating a .tar.gz file

痴心易碎 submitted on 2019-12-04 07:25:32
Question: I have a /public_html/ folder; in that folder there's a /tmp/ folder with about 70 GB of files I don't really need. Now I am trying to create a .tar.gz of /public_html/ excluding /tmp/. This is the command I ran: tar -pczf MyBackup.tar.gz /home/user/public_html/ --exclude "/home/user/public_html/tmp/" The tar is still being created, and by doing an ls -sh I can see that MyBackup.tar.gz is already about 30 GB, and I know for sure that /public_html/ without /tmp/ doesn't have more than 1 GB of…
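With GNU tar, the usual fixes are to drop the trailing slash from the exclude pattern (the pattern is matched against the path as traversed, which ends in "tmp", not "tmp/") and, for portability to non-GNU tars, to place --exclude before the directory being archived. A form that typically works:

```sh
# Assuming GNU tar: exclude option first, no trailing slash on the pattern.
tar -pczf MyBackup.tar.gz --exclude='/home/user/public_html/tmp' /home/user/public_html/

# Or exclude by bare name, which skips any entry called "tmp":
tar -pczf MyBackup.tar.gz --exclude='tmp' /home/user/public_html/
```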

Tool for lossless image compression [closed]

我的梦境 submitted on 2019-12-04 07:20:51
Running Google Page Speed on a public site, I saw some suggestions from the tool like the following: "Losslessly compressing http://g-ecx.images-amazon.com/images/G/01/electronics/detail-page/Acer-120x120._V137848950_.gi could save 4.8KiB (26% reduction)", and they also provide a link to the optimized content. But they do it on a per-image basis. I saw some significant reduction in file sizes after…
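Common standalone tools perform the same lossless optimization in batch. Sketches below, one per format; the flags shown are standard for these tools, but worth verifying against your installed versions:

```sh
jpegtran -copy none -optimize -outfile out.jpg in.jpg  # JPEG: drop metadata, rebuild Huffman tables
optipng -o7 image.png                                  # PNG: recompress in place at maximum effort
gifsicle -O3 in.gif -o out.gif                         # GIF: re-optimize frames and LZW data
```

All three are lossless, so the pixels are unchanged; only the encoding gets smaller.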

Duplicate text-finding

时间秒杀一切 submitted on 2019-12-04 07:14:00
My main problem is finding a suitable way to automatically turn this, for example: d+c+d+f+d+c+d+f+d+c+d+f+d+c+d+f+ into this: [d+c+d+f+]4 i.e. finding duplicates next to each other, then making a shorter "loop" out of those duplicates. So far I have found no suitable solution to this, and I look forward to a response. P.S. To avoid confusion, the sample above is not the only thing that needs "looping"; it differs from file to file. Oh, and this is intended for a C++ or C# program; either is fine, though I'm open to any other suggestions as well. Also, the main idea is…
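One quick way to prototype this idea (sketched in Python for brevity, though the asker wants C++ or C#) is a backreference regex: (.+?)\1+ matches the shortest unit that is immediately repeated at least once more, and a replacement callback rewrites the run as [unit]count. It is greedy rather than globally optimal and can backtrack heavily on pathological input, so treat it as a starting point:

```python
import re

def fold_repeats(text):
    """Rewrite maximal runs of an adjacent repeated unit as [unit]count."""
    def repl(match):
        unit = match.group(1)
        count = len(match.group(0)) // len(unit)
        return "[%s]%d" % (unit, count)
    # (.+?)\1+ : the shortest unit immediately followed by one or more
    # copies of itself; the callback folds the whole run into one token.
    return re.sub(r"(.+?)\1+", repl, text)

print(fold_repeats("d+c+d+f+d+c+d+f+d+c+d+f+d+c+d+f+"))  # -> [d+c+d+f+]4
```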

YSlow gives F grade to files compressed with mod_deflate

 ̄綄美尐妖づ submitted on 2019-12-04 06:50:47
I'm using mod_deflate on Apache 2.2 with the compression level set to 9. I've fine-tuned every possible aspect of the site based on the recommendations of YSlow (v2) and have managed to get an overall A grade (Total Score: 91), as well as on all categories except for: Make fewer HTTP requests (Grade C; I'm still working on further unification of images) and Compress components with gzip (Grade F). YSlow still reports an F and tells me to use gzip on my CSS and JS files. Here's a screenshot of the YSlow report (the domain has been blurred out for the sake of privacy). However, sites…
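When YSlow flags CSS/JS despite mod_deflate being enabled, the usual suspects are MIME types missing from the output filter or an intermediary stripping the Accept-Encoding header. A typical configuration that covers CSS and JS explicitly might look like this (the exact Content-Type values your server emits may differ, so check the response headers):

```apache
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml
    AddOutputFilterByType DEFLATE text/css
    AddOutputFilterByType DEFLATE application/javascript application/x-javascript text/javascript
</IfModule>
```

A quick check from the command line: curl -sI -H 'Accept-Encoding: gzip' http://example.com/site.css (hypothetical URL) and look for Content-Encoding: gzip in the response.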

Compress binaries in SVN?

不打扰是莪最后的温柔 submitted on 2019-12-04 06:48:40
Question: I have written a script to compress and uncompress the binary files of a selected directory (and its sub-directories). I need to run the script before I commit files to SVN. Is there a way to use the pre-commit hook to execute the script? If so, how do I pass the script the root directory (so it can scan the sub-folders and compress), and what should I write in the hook to execute the script? I need to do the same thing when I check out files: execute a script. Again I…
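One caveat before any code: the Subversion documentation warns that hook scripts must never modify the commit transaction, since clients' working copies would then silently disagree with what the repository stored. So a pre-commit hook can validate and reject, but not compress; the compression itself has to happen client-side, before svn commit and after svn checkout. A minimal validating hook, assuming (hypothetically) that a .bin extension marks files the client-side compress script should have handled:

```sh
#!/bin/sh
# REPOS/hooks/pre-commit -- Subversion invokes this with two arguments.
REPOS="$1"
TXN="$2"
SVNLOOK=/usr/bin/svnlook

# Reject the commit if any changed path still ends in .bin, i.e. the
# client-side compress script was not run. Adjust the pattern to taste.
if $SVNLOOK changed -t "$TXN" "$REPOS" | grep -q '\.bin$'; then
    echo "Found uncompressed .bin files; run the compress script before committing." >&2
    exit 1
fi
exit 0
```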

Multi-part gzip file random access (in Java)

耗尽温柔 submitted on 2019-12-04 06:22:56
This may fall in the realm of "not really feasible" or "not really worth the effort", but here goes. I'm trying to randomly access records stored inside a multi-part gzip file. Specifically, the files I'm interested in are compressed Heritrix ARC files. (In case you aren't familiar with multi-part gzip files, the gzip spec allows multiple gzip streams to be concatenated in a single gzip file. They do not share any dictionary information; it is simple binary appending.) I'm thinking it should be possible to do this by seeking to a certain offset within the file, then scanning for the gzip magic…
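The question targets Java, where the same idea works by seeking with RandomAccessFile and wrapping the positioned stream in GZIPInputStream, but the mechanics are easy to sketch in Python: seek to a member's starting byte offset (recorded in an index, or found by scanning for the gzip magic bytes 0x1f 0x8b) and decompress exactly one member:

```python
import zlib

def read_member(path, offset, chunk_size=64 * 1024):
    """Decompress the single gzip member that starts at byte `offset`."""
    # wbits = 16 + MAX_WBITS tells zlib to expect a gzip header and trailer.
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
    out = []
    with open(path, "rb") as f:
        f.seek(offset)
        while not d.eof:            # .eof flips once this member's trailer is read
            data = f.read(chunk_size)
            if not data:
                break
            out.append(d.decompress(data))
    return b"".join(out)
```

Using a raw decompressobj rather than the gzip module matters here: it stops at the end of the one member instead of reading on through the concatenated streams that follow.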

LZ77 compression from scratch

时光怂恿深爱的人放手 submitted on 2019-12-04 06:08:25
Question: I'm building an LZ77 compressor. I have read the whole file as a single string and tried to compress it. Is there any other way to do it? I'll attach my code below; do tell me if there are any changes to be made so that the program compresses quickly even when it reads a big file.

```python
import fileinput

class Assign:
    def pattern(self, data):
        # Build a bad-character shift table over all 256 byte values;
        # the default shift is the full pattern length.
        self.skip = []
        self.m = len(data)
        for k in range(256):
            self.skip.append(self.m)
        # Characters that occur in the pattern shift less, based on
        # their distance from the pattern's end.
        for k in range(self.m - 1):
            self.skip[ord(data[k])] = self.m - k - 1
        self…
```
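On the question actually asked: reading the whole file as one string is the main scalability problem, and processing the input in fixed-size blocks keeps memory bounded. A minimal sketch (the block size is arbitrary):

```python
def read_blocks(path, size=1 << 20):
    """Yield the file's contents in 1 MiB blocks instead of one big string."""
    with open(path, "rb") as f:
        while True:
            block = f.read(size)
            if not block:
                break
            yield block
```

Note that an LZ77 window spanning block boundaries needs extra bookkeeping, so the blocks cannot be compressed fully independently without losing some matches.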

Can we ZIP/Unzip files larger than 4GB using .Net 4.5 libraries?

寵の児 submitted on 2019-12-04 06:02:14
Question: We have an original zip library which uses SharpZipLib and .NET 2.0. We need to research what is now possible in .NET 4.5 using System.IO.Compression and System.IO.Compression.ZipArchive. Should we use the .NET 4.5 libraries or a 3rd-party open-source library? Also, we need to be able to zip/unzip files larger than 4 GB. Samples, blogs, any help is welcome. Thank you. Answer 1: 4 GB is the maximum addressable size for a 32-bit pointer, hence it will not work for larger files. But you can try…
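For what it's worth, the 4 GB ceiling really comes from the original ZIP format's 32-bit size and offset fields rather than pointer width; the Zip64 extension lifts it, and the .NET 4.5 ZipArchive writes Zip64 records when entries need them. A minimal sketch with the built-in types (paths are illustrative; reference the System.IO.Compression and System.IO.Compression.FileSystem assemblies):

```csharp
using System.IO.Compression;

class ZipDemo
{
    static void Main()
    {
        // Creates the archive, emitting Zip64 records if sizes exceed 4 GB.
        ZipFile.CreateFromDirectory(@"C:\data", @"C:\backup.zip");
        ZipFile.ExtractToDirectory(@"C:\backup.zip", @"C:\restored");
    }
}
```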

JavaScript compression

▼魔方 西西 submitted on 2019-12-04 06:00:08
Question: I'm looking for a tool that can compress JavaScript source code. I found some web tools that only delete whitespace characters, but perhaps there is a better tool that can also compress function names and field names, delete unused fields, and so on. Answer 1: A tool often used to compress JS code is the YUI Compressor. Considering there is this option: --nomunge (Minify only. Do not obfuscate local symbols.) it should be able to do what you asked. And here is an article about it: Introducing the YUI Compressor.
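Typical command-line usage looks like the following (the version in the jar filename is illustrative):

```sh
# Minify only, keeping local symbol names readable:
java -jar yuicompressor-2.4.8.jar --nomunge -o app.min.js app.js

# Default mode also renames ("munges") local variables for extra savings:
java -jar yuicompressor-2.4.8.jar -o app.min.js app.js
```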

C/C++ Packing and Compression [closed]

耗尽温柔 submitted on 2019-12-04 05:45:59
I'm working on a commercial project that requires a couple of files to be bundled (packed) into an archive and then compressed. Right now we have zlib in our utility library, but it doesn't look like zlib has the functionality to compress multiple files into one archive. Does anyone know of free libraries I'd be able to use for this? Answer (Amber): Perhaps libtar? It is also under a BSD license. Another answer: 7-Zip has a full SDK for several languages, including C and C++. The compression is extremely good, albeit not very fast; the code is licensed under the LGPL. Another answer: You could use libzip; it's under a BSD-like license, so it…
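As a concrete starting point for the libzip suggestion, a minimal sketch that packs a couple of files into one compressed archive might look like this (file names are illustrative, and error handling is abbreviated):

```c
/* Minimal libzip sketch: bundle two files into bundle.zip. */
#include <stdio.h>
#include <zip.h>

int main(void)
{
    int err = 0;
    zip_t *za = zip_open("bundle.zip", ZIP_CREATE | ZIP_TRUNCATE, &err);
    if (za == NULL) {
        fprintf(stderr, "zip_open failed: error %d\n", err);
        return 1;
    }

    const char *files[] = { "config.dat", "assets.bin" };  /* illustrative names */
    for (int i = 0; i < 2; i++) {
        /* zip_source_file reads the whole file (start 0, length -1). */
        zip_source_t *src = zip_source_file(za, files[i], 0, -1);
        if (src == NULL || zip_file_add(za, files[i], src, ZIP_FL_OVERWRITE) < 0) {
            fprintf(stderr, "adding %s failed: %s\n", files[i], zip_strerror(za));
            if (src != NULL)
                zip_source_free(src);  /* only free sources that were not added */
        }
    }

    zip_close(za);  /* the archive is actually written here */
    return 0;
}
```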