compression

“Linear dependence in the dictionary” exception in sklearn's OMP

Submitted by 爱⌒轻易说出口 on 2020-01-11 09:10:03

Question: I'm using sklearn's OrthogonalMatchingPursuit to get a sparse coding of a signal using a dictionary learned by a KSVD algorithm. However, during the fit I get the following RuntimeWarning: /usr/local/lib/python2.7/dist-packages/sklearn/linear_model/omp.py:391: RuntimeWarning: Orthogonal matching pursuit ended prematurely due to linear dependence in the dictionary. The requested precision might not have been met. copy_X=copy_X, return_path=return_path) In those cases the results are indeed not…
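The warning fires when the residual's correlation with every remaining atom drops to numerical zero, which is what happens when atoms are linearly dependent (e.g. duplicated by K-SVD). A minimal NumPy sketch of the greedy loop, illustrative only and not sklearn's actual implementation, shows the early-exit condition:

```python
import numpy as np

def omp(D, y, n_nonzero, tol=1e-10):
    """Greedy orthogonal matching pursuit over a dictionary D with
    unit-norm columns. Illustrative sketch, not sklearn's code."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        # This is the situation behind the RuntimeWarning: no atom has any
        # correlation left with the residual, so the pursuit stops early.
        if abs(corr[k]) < tol:
            break
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

If the warning appears with a K-SVD dictionary, a common cause is duplicated or unnormalised atoms; deduplicating and re-normalising the columns before fitting usually removes it.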

PHP - Compress Image to Meet File Size Limit

Submitted by 五迷三道 on 2020-01-11 07:55:27

Question: I have to upload image files that meet a max width dimension and max file size. I have the code that checks the width and resizes the image to meet the max image width. However, when saving the file I can set the quality: imagejpeg( $imgObject , 'resized/50.jpg' , 50 ); //save image and set quality What I would like to do is avoid setting a standard quality, as the images being submitted vary widely in quality and may be low to begin with. The quality of the image should be set as high…
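One generic way to avoid a fixed quality is to search for the highest quality that still fits the byte limit. A Python sketch of that search follows; the encode callback is a hypothetical wrapper (in PHP it would capture imagejpeg() output via output buffering, in Pillow via Image.save to a BytesIO), and the search assumes encoded size grows with quality, which holds approximately for JPEG:

```python
def max_quality_under_limit(encode, max_bytes, lo=1, hi=95):
    """Binary-search the highest JPEG quality whose output fits max_bytes.

    `encode(q)` is assumed to return the compressed bytes at quality q and
    to produce (roughly) larger output for larger q.
    Returns None if even the lowest quality is too big.
    """
    best = None
    while lo <= hi:
        q = (lo + hi) // 2
        if len(encode(q)) <= max_bytes:
            best = q          # fits: try a higher quality
            lo = q + 1
        else:
            hi = q - 1        # too big: lower the quality
    return best
```

With a real image this needs only about seven encodes to cover the 1-95 range, versus up to 95 for a linear scan down from maximum quality.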

Android, Compressing an image

Submitted by 自作多情 on 2020-01-11 06:18:06

Question: I am sending an image over the network via WiFi or the mobile network to be stored on a server and retrieved again. I've done that, but due to the size of images taken by the camera it's making my app slow. Just to point out, I'm opening the gallery and taking the pictures from there, not taking the picture directly from the app. I have noticed that images from WhatsApp that have been taken from the camera and gallery have been compressed to approx. 100 KB. At the moment my code takes a file…
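WhatsApp-style file sizes usually come from downscaling before re-encoding, not from quality alone. On Android the standard tool is BitmapFactory.Options.inSampleSize, a power-of-two decode factor; a Python sketch of the same arithmetic the Android documentation recommends:

```python
def in_sample_size(width, height, req_width, req_height):
    """Largest power-of-two factor that keeps both decoded dimensions at or
    above the requested size (mirrors BitmapFactory.Options.inSampleSize)."""
    size = 1
    while (width // (size * 2) >= req_width
           and height // (size * 2) >= req_height):
        size *= 2
    return size
```

Decoding a 4000x3000 photo at inSampleSize=4 loads only 1/16 of the pixels, after which Bitmap.compress(JPEG, quality, stream) brings a camera photo down dramatically before any quality tuning.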

Compressing a string using GZIPOutputStream

Submitted by 随声附和 on 2020-01-11 05:54:52

Question: I want to zip my string values. These string values should be the same as .NET zipped strings. I wrote a Decompress method, and when I send a .NET zipped string to it, it works correctly. But the Compress method does not work correctly. public static String Decompress(String zipText) throws IOException { int size = 0; byte[] gzipBuff = Base64.decode(zipText); ByteArrayInputStream memstream = new ByteArrayInputStream(gzipBuff, 4, gzipBuff.length - 4); GZIPInputStream gzin = new GZIPInputStream…
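The clue is in Decompress: it skips the first 4 bytes of the decoded Base64 payload. The common .NET string-compression snippet this interoperates with prepends the uncompressed length as a 4-byte little-endian integer before the gzip stream, so Compress has to write that prefix too. A Python sketch of the framing, assuming that length-prefix convention:

```python
import base64
import gzip
import struct

def compress(text: str) -> str:
    raw = text.encode("utf-8")
    # 4-byte little-endian uncompressed length, then the gzip stream:
    # this is the prefix the Java Decompress skips with offset 4.
    framed = struct.pack("<I", len(raw)) + gzip.compress(raw)
    return base64.b64encode(framed).decode("ascii")

def decompress(zip_text: str) -> str:
    buf = base64.b64decode(zip_text)
    return gzip.decompress(buf[4:]).decode("utf-8")
```

In Java, the equivalent Compress would write the length in little-endian order (e.g. via ByteBuffer.order(LITTLE_ENDIAN)) before the GZIPOutputStream bytes, then Base64-encode the whole buffer.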

Compressing & Decompressing 7z file in java

Submitted by Deadly on 2020-01-10 17:30:49

Question: I want to compress a file into zip, rar and 7z format using Java code. I also want to decompress these files at a specified location. Can anyone please tell me how to compress and decompress files using 7-Zip in Java? Answer 1: I have used: sevenzipjbinding.jar and sevenzipjbinding-Allplatforms.jar. I am now able to decompress files using these jars. Try this link for decompression: http://sourceforge.net/projects/sevenzipjbind/forums/forum/757964/topic/3844899 Answer 2: SevenZipBinding is great for…
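For the algorithm side, a small sketch: Python's stdlib lzma module implements the LZMA/LZMA2 compression that 7z uses, producing .xz streams rather than the .7z container. For real .7z archives, py7zr (Python) or Apache Commons Compress' SevenZFile/SevenZOutputFile (Java) are common choices.

```python
import lzma
import pathlib

def compress_file(src: str, dst: str) -> None:
    """Compress src into an .xz file: the same LZMA2 algorithm as 7z,
    but not the .7z container format."""
    data = pathlib.Path(src).read_bytes()
    pathlib.Path(dst).write_bytes(lzma.compress(data))

def decompress_file(src: str, dst: str) -> None:
    blob = pathlib.Path(src).read_bytes()
    pathlib.Path(dst).write_bytes(lzma.decompress(blob))
```

Zip is likewise covered by the Java standard library (java.util.zip) with no extra jars; rar is the odd one out, since its compression side is proprietary and most libraries only extract it.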

Why do the md5 hashes of two tarballs of the same file differ?

Submitted by 拈花ヽ惹草 on 2020-01-10 14:14:08

Question: I can run: echo "asdf" > testfile tar czf a.tar.gz testfile tar czf b.tar.gz testfile md5sum *.tar.gz and it turns out that a.tar.gz and b.tar.gz have different md5 hashes. It's true that they're different, which diff -u a.tar.gz b.tar.gz confirms. What additional flags do I need to pass to tar so that its output is consistent over time for the same input? Answer 1: tar czf outfile infiles is equivalent to tar cf - infiles | gzip > outfile The reason the files are different is because gzip…
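The culprit is the gzip header: RFC 1952 reserves a 4-byte MTIME field, so two otherwise identical streams compressed at different times differ. gzip -n (--no-name) zeroes the name and timestamp, making tar cf - testfile | gzip -n > a.tar.gz reproducible. The same effect, demonstrated from Python's stdlib:

```python
import gzip
import hashlib

data = b"asdf\n"

# With a fixed MTIME the output is byte-identical across runs...
a = gzip.compress(data, mtime=0)
b = gzip.compress(data, mtime=0)
assert hashlib.md5(a).hexdigest() == hashlib.md5(b).hexdigest()

# ...while a different timestamp changes only the header's
# MTIME field (bytes 4-7); everything after the 8th byte matches.
c = gzip.compress(data, mtime=1)
assert a != c and a[8:] == c[8:]
```

For fully reproducible tarballs the tar side matters too: file mtimes and archive ordering must also be pinned (GNU tar's --mtime and --sort=name are the usual flags).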

optimizing byte-pair encoding

Submitted by 泪湿孤枕 on 2020-01-10 10:53:06

Question: Noticing that byte-pair encoding (BPE) is sorely lacking from the large text compression benchmark, I very quickly made a trivial literal implementation of it. The compression ratio - considering that there is no further processing, e.g. no Huffman or arithmetic encoding - is surprisingly good. The runtime of my trivial implementation was less than stellar, however. How can it be optimized? Is it possible to do it in a single pass? Answer 1: This is a summary of my progress so far: Googling…
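For reference, the trivial implementation under discussion looks roughly like this (an illustrative sketch using symbol values >= 256 rather than the classic "unused byte" trick). Each merge rescans the whole sequence, which is exactly the cost the question wants to avoid; the usual optimisation is to keep pair counts and occurrence lists incrementally updated so a replacement only touches the counts around it.

```python
from collections import Counter

def bpe_compress(data: bytes, max_merges: int = 100):
    """Naive byte-pair encoding: repeatedly replace the most frequent
    adjacent pair with a fresh symbol. Every merge rescans the sequence."""
    seq = list(data)
    merges = []
    next_sym = 256
    for _ in range(max_merges):
        counts = Counter(zip(seq, seq[1:]))
        if not counts:
            break
        pair, n = counts.most_common(1)[0]
        if n < 2:
            break  # no pair repeats; further merges cannot shrink seq
        merges.append((next_sym, pair))
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_sym += 1
    return seq, merges

def bpe_expand(seq, merges) -> bytes:
    # Undo the merges in reverse order.
    for sym, pair in reversed(merges):
        seq = [b for s in seq for b in (pair if s == sym else (s,))]
    return bytes(seq)
```

A true single pass is not possible because each merge depends on global frequencies, but with incrementally maintained counts the total work approaches linear in practice.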

Encoding binary data within XML: Are there better alternatives than base64?

Submitted by 笑着哭i on 2020-01-10 02:53:33

Question: I want to encode and decode binary data within an XML file (with Python, but whatever). I have to face the fact that XML tag content disallows certain characters. The only allowed ones are described in the XML spec: Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] Which means that the disallowed ones are: the 29 control characters (0x00 - 0x1F, i.e. 000xxxxx) except 0x09, 0x0A, 0x0D; any Unicode character representation above 2 bytes (UTF-16+) is illegal (U…
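The main stdlib alternative is Ascii85/Base85, which cuts the overhead from roughly 33% to 25%. The catch is that its alphabet includes '<', '>' and '&', so the output must be XML-escaped or wrapped in CDATA, which can eat back the savings. A quick size comparison with Python's stdlib:

```python
import base64

payload = bytes(range(256))  # every possible byte value

b64 = base64.b64encode(payload)
a85 = base64.a85encode(payload)

print(len(b64), len(b64) / len(payload))  # 344 bytes, ~1.34x overhead
print(len(a85), len(a85) / len(payload))  # 320 bytes, 1.25x overhead

# Caveat: the Ascii85 alphabet spans '!'..'u', which includes the XML
# metacharacters '<', '>' and '&'; those must be escaped, or the content
# wrapped in a CDATA section, before embedding. Base64's alphabet
# (A-Z, a-z, 0-9, '+', '/', '=') is XML-safe as-is, which is a large
# part of why it remains the default choice despite the extra bytes.
```

If size really matters, compressing the binary payload before Base64-encoding it usually saves far more than switching encodings.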