compression

.NET compression of XML to store in SQL Server database

旧时模样 submitted on 2019-11-29 11:12:26
Currently our .NET application constructs XML data in memory that we persist to a SQL Server database. The XElement object is converted to a string using ToString() and then stored in a varchar(MAX) column in the DB. We didn't want to use the SQL XML datatype, as we didn't need any validation and SQL doesn't need to query the XML at any stage. Although this implementation works fine, we want to reduce the size of the database by compressing the XML before storing it, and decompressing it after retrieving it. Does anyone have any sample code for compressing an XElement object (and decompressing
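
(The excerpt above is cut off; the sketch below is not from the original thread.) The round trip it asks about, serialize the XML, compress the UTF-8 bytes, store the bytes, then reverse the steps on the way out, is language-agnostic. A minimal illustration in Java's java.util.zip follows; the class and method names are made up for the example, and in .NET the equivalent pieces would be System.IO.Compression.GZipStream with a varbinary(MAX) column rather than varchar(MAX).

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class XmlGzipRoundTrip {
    // Compress the serialized XML into bytes suitable for a binary column.
    static byte[] compress(String xml) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(xml.getBytes(StandardCharsets.UTF_8));
        } // closing the stream flushes the gzip trailer
        return buffer.toByteArray();
    }

    // Reverse the process after reading the bytes back from the database.
    static String decompress(byte[] data) throws Exception {
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(gzip.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        String xml = "<order id=\"1\"><item sku=\"A\" qty=\"2\"/></order>";
        byte[] stored = compress(xml);                       // what you would persist
        System.out.println(decompress(stored).equals(xml));  // true
    }
}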

Kafka message codec - compress and decompress

纵然是瞬间 submitted on 2019-11-29 11:09:18
When using Kafka, I can set a codec by setting the kafka.compression.codec property of my Kafka producer. Suppose I use snappy compression in my producer; when consuming the messages from Kafka using some kafka-consumer, should I do something to decode the data from snappy, or is it some built-in feature of the Kafka consumer? In the relevant documentation I could not find any property that relates to encoding in the Kafka consumer (it only relates to the producer). Can someone clarify this? As my understanding goes, the decompression is taken care of by the consumer itself. As mentioned in their
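
To make the division of labour concrete, here is a minimal producer sketch using the modern Java client, where the codec is declared only on the producer side; the broker address and topic name are placeholders. The consumer needs no codec property, because the message format records which codec was used and the consumer client library decompresses transparently.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SnappyProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Only the producer declares the codec; in the modern Java client the
        // property is "compression.type" (the older Scala producer used
        // "compression.codec", as in the question).
        props.put("compression.type", "snappy");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }
        // A consumer subscribed to "demo-topic" needs no codec setting at all:
        // decompression happens inside the consumer client library.
    }
}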

Can a program output a copy of itself

心已入冬 submitted on 2019-11-29 11:05:10
Question: I think this might be a classic question, but I am not aware of an answer. Can a program output a copy of itself, and, if so, is there a short program that does this? I do not accept the "empty program" as an answer, and I do not accept programs that have access to their own source code. Rather, I am thinking of something like this: int main(int argc, char** argv){ printf("int main(argc, char** argv){ printf... but I do not know how to continue... Answer 1: Yes. A programme that can make a copy of
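
Such programs are called quines, and the truncated answer is heading toward one. Here is a complete example in Java (rather than finishing the poster's C attempt): the trick is a single format string that is printed twice, once as the surrounding code and once, via %s, as its own quoted contents, with %c filling in the quote characters (ASCII 34) and the final newline (ASCII 10). Saved as Quine.java with exactly one trailing newline, it prints its own source.

public class Quine{public static void main(String[] a){String s="public class Quine{public static void main(String[] a){String s=%c%s%c;System.out.printf(s,34,s,34,10);}}%c";System.out.printf(s,34,s,34,10);}}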

GZipStream doesn't detect corrupt data (even CRC32 passes)?

☆樱花仙子☆ submitted on 2019-11-29 11:00:22
I'm using GZipStream to compress/decompress data. I chose this over DeflateStream since the documentation states that GZipStream also adds a CRC to detect corrupt data, which is another feature I wanted. My "positive" unit tests are working well in that I can compress some data, save the compressed byte array and then successfully decompress it again. The ".NET GZipStream compress and decompress problem" post helped me realize that I needed to close the GZipStream before accessing the compressed or decompressed data. Next, I continued to write a "negative" unit test to be sure corrupt data
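
For what it's worth, the same kind of negative test can be sketched outside .NET; the snippet below is Java, not the poster's code, and it relies on the fact that a gzip stream's CRC is only verified once the data has actually been read through to the end, so the test has to consume the stream before it can expect a failure.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CorruptGzipTest {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write("some payload worth protecting".getBytes(StandardCharsets.UTF_8));
        }
        byte[] compressed = buffer.toByteArray();

        // Flip one byte in the middle of the compressed payload.
        compressed[compressed.length / 2] ^= (byte) 0xFF;

        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            in.readAllBytes(); // corruption is only reported once the data is actually read
            System.out.println("corruption not detected");
        } catch (IOException expected) {
            // Either a deflate error or a CRC/trailer mismatch, depending on which byte was hit.
            System.out.println("detected: " + expected.getMessage());
        }
    }
}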

Java: Error creating a GZIPInputStream: Not in GZIP format

社会主义新天地 submitted on 2019-11-29 10:26:54
I am trying to use the following Java code to compress and uncompress a String. But the line that creates a new GZIPInputStream object out of a new ByteArrayInputStream object throws a "java.util.zip.ZipException: Not in GZIP format" exception. Does anyone know how to solve this? String orig = "............."; // compress it ByteArrayOutputStream baostream = new ByteArrayOutputStream(); OutputStream outStream = new GZIPOutputStream(baostream); outStream.write(orig.getBytes()); outStream.close(); String compressedStr = baostream.toString(); // uncompress it InputStream inStream = new
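
The excerpt stops before the answer, but the visible code already shows the likely culprit: baostream.toString() pushes arbitrary compressed bytes through the platform charset, which mangles them, so the later GZIPInputStream sees garbage and reports "Not in GZIP format". A corrected sketch (keeping the compressed data as byte[], or Base64 if a String is really needed) might look like this:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipStringExample {
    public static void main(String[] args) throws Exception {
        String orig = ".............";

        // compress it, keeping the result as bytes instead of calling toString()
        ByteArrayOutputStream baostream = new ByteArrayOutputStream();
        try (GZIPOutputStream outStream = new GZIPOutputStream(baostream)) {
            outStream.write(orig.getBytes(StandardCharsets.UTF_8));
        }
        byte[] compressed = baostream.toByteArray();
        String compressedStr = Base64.getEncoder().encodeToString(compressed); // safe textual form

        // uncompress it from the decoded bytes
        byte[] raw = Base64.getDecoder().decode(compressedStr);
        try (GZIPInputStream inStream = new GZIPInputStream(new ByteArrayInputStream(raw))) {
            String roundTripped = new String(inStream.readAllBytes(), StandardCharsets.UTF_8);
            System.out.println(roundTripped.equals(orig)); // true
        }
    }
}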

PHP+Imagick - PNG Compression

浪尽此生 submitted on 2019-11-29 10:22:15
How do I efficiently compress a PNG? In my case, the images are small grayscale images with transparency. Currently I'm playing with this: // ... $im->setImageFormat('png'); $im->setImageColorspace(\Imagick::COLORSPACE_GRAY); $im->setImageCompression(\Imagick::COMPRESSION_LZW); $im->setImageCompressionQuality(9); $im->stripImage(); $im->writeImage($url_t); As Imagick doesn't offer COMPRESSION_PNG, I've tried LZW, but there's almost no change in the filesize (usually it's even bigger than before). If I open the image in GIMP and simply save it, the filesize gets drastically reduced (e.g. 11,341

Why does JPEG compression process images in 8x8 blocks?

房东的猫 submitted on 2019-11-29 10:22:06
Why does JPEG compression process the image in 8x8 blocks instead of applying the Discrete Cosine Transform to the whole image? 8x8 was chosen after numerous experiments with other sizes. The conclusions of those experiments are: 1. Matrices larger than 8x8 are harder to apply mathematical operations to (transforms, etc.), are not well supported by hardware, or take longer to process. 2. Matrices smaller than 8x8 don't carry enough information to continue along the pipeline, which results in poor quality in the compressed image. Read my blog: http://nboddula.blogspot.com/2013/05/image

Javascript client-data compression

廉价感情. submitted on 2019-11-29 09:55:11
I am trying to develop a paint brush application through Processing.js. This API has a function loadPixels() that loads the RGB values into an array. Now I want to store the array in the server DB. The problem is the size of the array: when I convert it to a string, the size is 5 MB. Is the best solution to do compression at the JavaScript level? How do I do it? See http://rosettacode.org/wiki/LZW_compression#JavaScript for an LZW compression example. It works best on longer strings with repeated patterns. From the Wikipedia article on LZW: A dictionary is initialized to contain the single-character
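
Since the answer only gets as far as quoting the dictionary initialisation, here is a compact sketch of the encode side of LZW, written in Java rather than the JavaScript of the linked Rosetta Code page; the names are illustrative.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LzwEncode {
    // Classic LZW: the dictionary starts with every single character,
    // and grows by one entry (current match + next char) per emitted code.
    static List<Integer> compress(String input) {
        Map<String, Integer> dict = new HashMap<>();
        int nextCode = 0;
        for (char c = 0; c < 256; c++) {
            dict.put(String.valueOf(c), nextCode++);
        }
        List<Integer> out = new ArrayList<>();
        String current = "";
        for (char c : input.toCharArray()) {
            String candidate = current + c;
            if (dict.containsKey(candidate)) {
                current = candidate;             // keep extending the match
            } else {
                out.add(dict.get(current));      // emit the longest known match
                dict.put(candidate, nextCode++); // learn the new sequence
                current = String.valueOf(c);
            }
        }
        if (!current.isEmpty()) {
            out.add(dict.get(current));
        }
        return out;
    }

    public static void main(String[] args) {
        // Repetitive input compresses well: fewer output codes than input characters.
        System.out.println(compress("TOBEORNOTTOBEORTOBEORNOT"));
    }
}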

How to compress a directory with libbz2 in C++

流过昼夜 submitted on 2019-11-29 09:49:23
I need to create a tarball of a directory and then compress it with bz2 in C++. Is there any decent tutorial on using libtar and libbz2? Matthew Flaschen: Okay, I worked up a quick example for you. No error checking and various arbitrary decisions, but it works. libbzip2 has fairly good web documentation. libtar, not so much, but there are manpages in the package, an example, and a documented header file. The below can be built with g++ C++TarBz2.cpp -ltar -lbz2 -o C++TarBz2.exe: #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <stdlib.h> #include <libtar.h> #include

Compressing A Series of JSON Objects While Maintaining Serial Reading?

给你一囗甜甜゛ submitted on 2019-11-29 09:40:33
Question: I have a bunch of JSON objects that I need to compress as they're eating too much disk space, approximately 20 gigs worth for a few million of them. Ideally what I'd like to do is compress each individually and then, when I need to read them, just iteratively load and decompress each one. I tried doing this by creating a text file with each line being a compressed JSON object via zlib, but this is failing with a decompress error due to a truncated stream, which I believe is due to the
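
The excerpt is cut off, but the symptom it describes, newline-delimited records whose compressed bytes can themselves contain newline bytes, is a classic framing problem. One common remedy (an assumption here, not necessarily the thread's accepted answer) is to length-prefix each compressed record so they can still be read back one at a time; the sketch below uses Java's Deflater/Inflater streams rather than the poster's zlib, with made-up file and method names.

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class FramedCompressedRecords {
    // Write: [4-byte length][deflate-compressed JSON] per record.
    static void writeAll(File file, List<String> jsonRecords) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(file)))) {
            for (String json : jsonRecords) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                try (DeflaterOutputStream def = new DeflaterOutputStream(buf)) {
                    def.write(json.getBytes(StandardCharsets.UTF_8));
                }
                byte[] compressed = buf.toByteArray();
                out.writeInt(compressed.length); // the prefix makes record boundaries explicit
                out.write(compressed);
            }
        }
    }

    // Read records one at a time without loading the whole file.
    static void readAll(File file) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(file)))) {
            while (true) {
                int length;
                try {
                    length = in.readInt();
                } catch (EOFException end) {
                    break; // clean end of file
                }
                byte[] compressed = in.readNBytes(length);
                try (InflaterInputStream inf = new InflaterInputStream(
                        new ByteArrayInputStream(compressed))) {
                    String json = new String(inf.readAllBytes(), StandardCharsets.UTF_8);
                    System.out.println(json); // hand each record to whatever processes it
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File file = new File("records.bin"); // illustrative path
        writeAll(file, List.of("{\"id\":1}", "{\"id\":2,\"name\":\"x\"}"));
        readAll(file);
    }
}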