How to preprocess and decompress .gz file on Azure Data Lake store?


As an aside, automatic compression on OUTPUT is on the roadmap. Please add your vote to https://feedback.azure.com/forums/327234-data-lake/suggestions/13418367-support-gzip-on-output-as-well

According to the main EXTRACT article, the U-SQL EXTRACT method automatically recognises the GZip format, so you don't need to do anything special.

Extraction from compressed data

In general, the files are passed as-is to the UDO. One exception is that EXTRACT will recognize GZip-compressed files with the file extension .gz and automatically decompress them as part of the extraction process. The actual UDO will see the uncompressed data. For any other compression scheme, users will have to write their own custom extractor. Note that U-SQL has an upper limit of 4GB on a GZip-compressed file. If you apply your EXTRACT expression to a file larger than this limit, the error E_RUNTIME_USER_MAXCOMPRESSEDFILESIZE will be raised during the compilation of the job.
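For those other compression schemes, the custom extractor is a C# class implementing IExtractor (in code-behind or a registered assembly) that decompresses the stream itself. Here is a minimal sketch for a Deflate-compressed CSV; the class name, column names and choice of DeflateStream are my own illustration, not part of the documented behaviour quoted above:

using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Text;
using Microsoft.Analytics.Interfaces;

// Illustrative sketch only: reads a Deflate-compressed CSV and emits rows.
// AtomicFileProcessing = true because a compressed stream cannot be split
// across extract vertices.
[SqlUserDefinedExtractor(AtomicFileProcessing = true)]
public class DeflateCsvExtractor : IExtractor
{
    public override IEnumerable<IRow> Extract(IUnstructuredReader input, IUpdatableRow output)
    {
        using (var deflate = new DeflateStream(input.BaseStream, CompressionMode.Decompress))
        using (var reader = new StreamReader(deflate, Encoding.UTF8))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                var parts = line.Split(',');
                output.Set<string>("col1", parts[0]);
                output.Set<string>("col2", parts[1]);
                output.Set<string>("col3", parts[2]);
                yield return output.AsReadOnly();
            }
        }
    }
}

In the script you would then write USING new DeflateCsvExtractor(); in place of USING Extractors.Csv();.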

It looks like this feature has been available for a while, but the documentation was updated in Nov 2016 to introduce the 4GB limit.
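Since the limit applies per file, one possible workaround (my own suggestion, not from the article) is to split large inputs into several smaller .gz files and extract them together with a file set pattern; the path below is illustrative:

@file =
    EXTRACT col1 string,
            col2 string,
            col3 string
    FROM "/input/part{*}.csv.gz"   // each matched file is decompressed individually
    USING Extractors.Csv();

Only each individual file has to stay under 4GB, not the combined input.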

Here is a simple example which converts a gzipped, comma-separated file to pipe-separated:

DECLARE @file1 string = @"/input/input.csv.gz";

// The .gz extension is all EXTRACT needs to decompress the file on the fly.
@file =
    EXTRACT col1 string,
            col2 string,
            col3 string
    FROM @file1
    USING Extractors.Csv(silent : true);   // silent:true skips rows that fail to parse

@output =
    SELECT *
    FROM @file;

OUTPUT @output
TO "/output/output.txt"
ORDER BY col1
// Optionally cap the rows written while testing:
//FETCH 500 ROWS
USING Outputters.Text(quoting : false, delimiter : '|');