compression

WIA: no compression when saving files

∥☆過路亽.° submitted on 2019-12-07 00:08:30
I'm using WIA for scanning images and noticed that images aren't stored efficiently, as SaveFile apparently doesn't make use of compression. Currently I'm using this code: WIA.ImageFile img = (WIA.ImageFile)item.Transfer(WIA.FormatID.wiaFormatPNG); img.SaveFile(path); Is there a way to use WIA for compression, or how else could I save the image using compression? EDIT: Using the following code I was able to decrease the file size from 25 MB to 10 MB. WIA.ImageFile img = (WIA.ImageFile)item.Transfer(WIA.FormatID.wiaFormatPNG); WIA.ImageProcess ImageProcess1 = new WIA.ImageProcessClass(); System.Object

c# saving very large bitmaps as jpegs (or any other compressed format)

我是研究僧i submitted on 2019-12-06 21:36:59
Question: I am currently handling very large images, which are basically generated by stitching together many smaller images (e.g. panorama or photo-mosaic software). To avoid out-of-memory exceptions (only "maps" describing how to arrange the smaller images are held in memory), I wrote some code that saves these images line by line as bitmaps using BinaryWriter and LockBits. So far, so good. The problem now is that I would like to save these images as JPEGs (or PNGs) as well. Since I am pretty new to C# I can

Fast PDF Compression Library for .NET

陌路散爱 submitted on 2019-12-06 21:05:45
Question: I need a fast PDF compression library for .NET that will allow me to run 10 concurrent threads, each compressing a separate PDF file to around 10% of its original size. Any suggestions? (I have already tried the product from neeviaPDF.com; it is not as fast as I need.)
Answer 1: The company's website shows three examples; one compresses a PDF from 9.1 MB to 133 kB. Opening them up with Notepad shows a single 2500x3000 mostly-black image compressed with FlateDecode converted to the same size
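The dramatic 9.1 MB-to-133 kB example above is less surprising than it looks: FlateDecode is just the DEFLATE algorithm (the same one zlib implements), and a mostly-black image is extremely low-entropy data. A quick illustrative sketch (in Python, not .NET, purely to show the ratio):

```python
import zlib

# A mostly-black 2500x3000 8-bit grayscale image: highly repetitive bytes.
raw = bytes(2500 * 3000)              # 7,500,000 zero bytes
compressed = zlib.compress(raw, level=9)

# The Flate stream is a tiny fraction of the raw image size.
print(len(raw), len(compressed), f"{len(compressed) / len(raw):.4%}")
```

Typical PDFs with scanned or photographic content won't compress anywhere near this well, which is worth keeping in mind when benchmarking vendors against their own demo files.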

Fastest real time decompression algorithm

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-06 19:43:17
Question: I'm looking for an algorithm to decompress chunks of data (1 kB-30 kB) in real time with minimal overhead. Compression should preferably be fast, but isn't as important as decompression speed. From what I could gather, LZO1X would be the fastest one. Have I missed anything? Ideally the algorithm is not under the GPL.
Answer 1: lz4 is what you're looking for here. LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core, scalable with multi-core CPUs. It features an extremely
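When comparing candidates like LZ4 and LZO1X, it helps to benchmark decompression throughput on chunks of your own size range rather than trust headline numbers. A minimal measurement sketch, using stdlib zlib as a stand-in (swap in an LZ4 binding such as the third-party `lz4` package to benchmark the real candidate):

```python
import time
import zlib

# ~30 kB chunk, matching the question's upper bound.
payload = (b"sensor-frame " * 3000)[:30000]
blob = zlib.compress(payload, 1)          # fast compression setting

start = time.perf_counter()
for _ in range(1000):
    out = zlib.decompress(blob)
elapsed = time.perf_counter() - start

print(f"{1000 * len(payload) / elapsed / 1e6:.1f} MB/s decompressed")
```

Running the same loop over each library on the target hardware gives a like-for-like comparison; LZ4 and LZO typically decompress several times faster than zlib at the cost of a worse ratio.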

Compress file on S3

感情迁移 submitted on 2019-12-06 18:21:57
Question: I have a 17.7 GB file on S3. It was generated as the output of a Hive query, and it isn't compressed. I know that by compressing it, it would be about 2.2 GB (gzip). How can I download this file locally as quickly as possible when transfer is the bottleneck (250 kB/s)? I haven't found any straightforward way to compress the file on S3, or to enable compression on transfer in s3cmd, boto, or related tools.
Answer 1: S3 does not support stream compression, nor is it possible to compress the uploaded file
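Since S3 won't compress the object for you, a common workaround is to gzip it somewhere with fast access to S3 (e.g. an EC2 instance in the same region) and re-upload the compressed copy before downloading over the slow link. A minimal sketch of the compression step that streams the file in chunks, so even a 17.7 GB input never has to fit in memory (paths are hypothetical):

```python
import gzip
import shutil

def gzip_file(src_path: str, dst_path: str) -> None:
    """Stream-compress src_path into a gzip file at dst_path,
    reading in 1 MiB chunks so memory use stays constant."""
    with open(src_path, "rb") as src, gzip.open(dst_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=1024 * 1024)

# gzip_file("query_output.tsv", "query_output.tsv.gz")
```

At the question's 250 kB/s, shrinking 17.7 GB to roughly 2.2 GB cuts the transfer from about 20 hours to under 3.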

Fiddler doesn't decompress gzip responses

我的梦境 submitted on 2019-12-06 16:44:49
Question: I use Fiddler to debug my application. Whenever the response is compressed by the server, Fiddler shows unreadable binary data instead of the decompressed response:
/* Response to my request (POST) */
HTTP/1.1 200 OK
Server: xyz.com
Date: Tue, 07 Jun 2011 22:22:21 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
X-Powered-By: PHP/5.3.3
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
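If a proxy or tool won't decode the body for you, it can be decompressed by hand according to the Content-Encoding header. A sketch in Python (assuming the chunked transfer framing has already been stripped, leaving the raw compressed body):

```python
import gzip
import zlib

def decode_body(body: bytes, content_encoding: str) -> bytes:
    """Decompress an HTTP response body per its Content-Encoding header."""
    if content_encoding == "gzip":
        return gzip.decompress(body)
    if content_encoding == "deflate":
        try:
            return zlib.decompress(body)        # RFC-compliant zlib-wrapped deflate
        except zlib.error:
            return zlib.decompress(body, -15)   # raw deflate, sent by some servers
    return body                                 # identity / unknown: pass through

# Round trip: a server gzips a small HTML body, the client decodes it.
wire = gzip.compress(b"<html>ok</html>")
assert decode_body(wire, "gzip") == b"<html>ok</html>"
```

The deflate fallback matters in practice: some servers send raw DEFLATE without the zlib header, which is why the sketch retries with negative wbits.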

Renaming ICSharpCode.SharpZipLib.dll

喜夏-厌秋 submitted on 2019-12-06 16:39:21
I am having a problem renaming the ICSharpCode.SharpZipLib.dll file to anything else; I am trying to shorten the file name. I reference the assembly in the project, but when the program reaches the statements where I use the library, it throws an error that it could not find the assembly or file 'ICSharpCode.SharpZipLib'. When I change the file name back to ICSharpCode.SharpZipLib.dll, the application works normally. So, is there any way to change the file name? Also, am I allowed to change it without violating the license (I am going to use it in a commercial application)? Thanks. You

Native Swift implementation of DEFLATE (unzip) algorithm [closed]

回眸只為那壹抹淺笑 submitted on 2019-12-06 16:19:53
Closed. This question is off-topic and is not currently accepting answers. Closed 2 years ago.
I have some data that has been compressed with the DEFLATE algorithm, which I believe basically just means it's zipped up. I'm writing a Swift app and was interested in finding out whether there is a native, pure Swift (2.0) implementation of the unzip algorithm. I need to implement this in a Swift dynamic framework, and as such it would be preferable if I didn't have to use Objective-C code, as that requires me
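One point of confusion worth untangling before picking any implementation: DEFLATE is the raw compressed stream, and "zip", zlib, and gzip are three different wrappers around it, each with its own header and checksum. Whatever Swift library ends up being used, knowing which framing the data carries is essential. A short Python illustration of the three framings (Python here only because its stdlib exposes all three; the distinction itself is language-independent):

```python
import gzip
import zlib

data = b"hello deflate" * 100

zlib_stream = zlib.compress(data)        # RFC 1950: deflate + 2-byte header + Adler-32
raw_deflate = zlib_stream[2:-4]          # RFC 1951: the bare deflate stream inside
gzip_stream = gzip.compress(data)        # RFC 1952: deflate + gzip header + CRC-32

assert zlib.decompress(zlib_stream) == data
assert zlib.decompress(raw_deflate, -15) == data   # negative wbits = headerless
assert gzip.decompress(gzip_stream) == data
```

Zip archive entries store raw (RFC 1951) deflate, so a decoder expecting a zlib or gzip header will reject them even though the underlying algorithm is identical.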

Python and zlib: Terribly slow decompressing concatenated streams

可紊 submitted on 2019-12-06 15:40:11
I've been supplied with a zipped file containing multiple individual streams of compressed XML. The compressed file is 833 MB. If I try to decompress it as a single object, I only get the first stream (about 19 kB). I've modified the following code, supplied as an answer to an older question, to decompress each stream and write it to a file:
import zlib
outfile = open('output.xml', 'w')
def zipstreams(filename):
    """Return all zip streams and their positions in file."""
    with open(filename, 'rb') as fh:
        data = fh.read()
    i = 0
    print "got it"
    while i < len(data):
        try:
            zo = zlib.decompressobj()
            dat
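The usual fix for concatenated zlib streams is `decompressobj`'s `unused_data` attribute: after a stream's end marker, it holds whatever bytes follow, so the loop can hand them straight to the next decompressor instead of rescanning offsets byte by byte (the try/except scanning approach above is what makes decompression terribly slow on large files). A Python 3 sketch:

```python
import zlib

def decompress_streams(data: bytes):
    """Yield each concatenated zlib stream in data, decompressed."""
    while data:
        zo = zlib.decompressobj()
        yield zo.decompress(data) + zo.flush()
        data = zo.unused_data          # bytes after the current stream's end

# Two independent zlib streams concatenated, as in the question's file.
two = zlib.compress(b"<a/>") + zlib.compress(b"<b/>")
assert list(decompress_streams(two)) == [b"<a/>", b"<b/>"]
```

Because each iteration consumes exactly one stream, this runs in a single pass over the data; for an 833 MB file the input could also be read and fed in chunks rather than loaded whole, at the cost of slightly more bookkeeping.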